
sokhaty
  • Cloud Cover Episode 41 - Windows Azure Toolkit for Windows Phone 7

    Good as always.

    So, was Steve making some sort of a point by playing only A major? Azure-Majure?

  • C9 Lectures: Yuri Gurevich - Introduction to Algorithms and Computational Complexity, 2 of n

    It seems that a discussion about the "sameness" of two or more algorithms is pointless unless the participating parties agree on what sameness means. That in turn requires defining a meta-algorithm to measure various properties of the algorithms of interest and compute a measure of sameness. So the question "are two algorithms the same" has no answer in the general case.

    For similar reasons, in order to pick the better algorithm, one has to disclose all the constraints and assumptions used to compute the measure of "goodness".

  • Programming Streams of Coincidence with Join and GroupJoin for Rx

    Not sure I got all of it; I guess I'll have to watch it again.

    It would be very helpful to look at some samples. Also, it looks like the discussion rests on the embedded assumption that observations of the events are direct. I wonder how it all changes if observations were indirect - for example, when you are trying to reconstruct a flow of events from gathered clues. The stream of clues is directly observable (and ordered in time), while pointers into the event stream of interest can refer to past or future times, or be mere durations without definite starting points. Would Rx be useful in that case?

  • Accepting Input and Assigning Values from a TextBox - Day 1 - Part 10

    It looks like the mystery of the lost update to myTextBlock.Text revolves around implementation specifics of event processing in Silverlight on WP7 (or maybe Silverlight in general?).
    Apparently, when the assignment myTextBox.Text = ""; happens, the "on text change" event is added to the end of the queue of events waiting to be processed. So the queued event is picked up for processing only after the myButtonClick event handler completes. And when that happens, all your changes to myTextBlock.Text are effectively undone, lost, or what have you. This is quite easy to validate by clicking the "Clear" button without entering any text immediately after the application loads, or by clicking it twice in a row after some text was entered. In that case the "on text change" event won't be fired (presumably changing text from "" to "" doesn't constitute a change), and you'll note that the text block won't display any text.
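    The queue-then-undo behavior described above can be mimicked with a toy event loop. This is not Silverlight code - just a minimal Python simulation, assuming (my guess, not confirmed by the episode) that the tutorial's TextChanged handler copies myTextBox.Text into myTextBlock.Text:

```python
from collections import deque

# Toy model: handlers raised during another handler are queued and run
# only after the current handler returns (the hypothesized Silverlight
# behavior, not an actual Silverlight API).
event_queue = deque()
state = {"myTextBox": "hello", "myTextBlock": ""}

def on_text_changed():
    # Assumed tutorial handler: mirror the text box into the text block.
    state["myTextBlock"] = state["myTextBox"]

def set_textbox_text(value):
    if state["myTextBox"] != value:      # "" -> "" fires no event
        state["myTextBox"] = value
        event_queue.append(on_text_changed)

def on_button_click():
    text = state["myTextBox"]
    set_textbox_text("")                 # queues TextChanged
    state["myTextBlock"] = text          # the update that will be lost

# The click handler runs to completion, then queued events are processed.
on_button_click()
while event_queue:
    event_queue.popleft()()

print(state["myTextBlock"])  # -> "" (the click handler's write was undone)
```

    Clicking "Clear" with an already-empty box corresponds to calling set_textbox_text("") when myTextBox is "": no event fires, and nothing is mirrored.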

    I guess the real takeaway from this episode is how easy it is to mess things up big time without understanding what events are and how they are processed at run time.

  • Expert to Expert: Inside LINQ-to-SharePoint

    Good discussion. Interesting perspective on how customer-extensible application metadata affects all the layers above.

    And it's rather obvious that "like.. you know" Bart now works for Eric.


    Just out of curiosity, what is so special about SharePoint security that computation has to happen in the business layer as opposed to a stored procedure or even a correlated subquery in a SQL statement?

  • E2E2E: Meijer, Rys and Vick - Programming Data

    Oh, my. So many good discussion topics, so little time. I stopped counting after the first five. More E2E sessions would be nice.

    I'd like to hear more about developments in the area of type systems, relational (im)purity, and programming languages to replace SQL.

    Cost based optimization and how things are evolving in that area would be interesting too.


    On a side topic, calling SQL relational is a blasphemy. It's as relational as a cow is a noble steed.

  • E2E: Erik Meijer and Patrick Dussud - Inside Garbage Collection

    On a loosely related subject, it would be interesting to know why SQL Server still ships with two VMs: the .NET one and a specialized one to run T-SQL code. Is the .NET VM too generic (insufficiently specialized) to provide a good runtime for T-SQL?

    Is it the same or a similar reason why the guys from Jane Street Capital hint at the .NET GC not being quite good enough to meet the demands placed on the runtime by a functional programming language (F#)?


    It seems logical to assume that any engineering solution (software or otherwise) has its range of applicability. Going below or above that range requires some other engineering solution. Is there any info out there on the applicability limits of the .NET GC, and of the .NET VM for that matter? (And how would one express those limits anyway - in terms of latency, or memory allocations per unit of measure? Where does being a general-purpose VM start and end?)

  • E2E: Erik Meijer and Dave Campbell: Data, Databases and the Cloud

    In a database of the future, I'd like to be able to declare my taxonomy, interval (including temporal) properties of objects and their attributes, and functions to derive new truths about objects from the truths obtained so far. Then I might want to specify how much of those computational graphs I want stored precomputed (which would give me OLTP, data warehouses, cubes, and whatever funky terms are currently used to describe the various stages of truth discovery/derivation). Are there any developments in those directions?

    (I guess the inclusion of MDM services into 2008 R2 may hint that the answer is "sort of".)


    On the subject of code running closer to the data: it has been obvious for quite a while now that shipping bits over the network to get a copy into application memory, and then processing it with (more often than not) less sophisticated algorithms than those already present at the point of storage, doesn't make much sense. A database is a computational engine, so it should have a powerful language for building computational expressions. Unfortunately SQL in general is not quite that, and T-SQL in particular isn't either.

    Having F# inside the database engine would be nice. Add a "pure mode" for querying (which should be side-effect free) and use imperative mode to produce side effects (data modifications).


  • The .NET4 Countdown Synchronization Primitive

    Yeah, that's a hack, and a bad one too. Why not at least make the "done adding" semantics explicit?

    Then, instead of adding one before and removing one after, there would be an explicit call to ce.NoMoreCounterIncrements().

    It doesn't guarantee that a programmer won't forget to add it to the code, but that's one change to one line of code as opposed to two (and it's a better "pattern" than the alternative).


    Plus, if this primitive's constructor accepted nested lambdas, then AddCount() and NoMoreCounterIncrements() could be hidden from the programmer altogether. Just declare what you want to spawn, and how many, in a nested lambda and happily wait for completion.
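    For what it's worth, both ideas can be sketched in a few lines of Python: a latch with an explicit "no more increments" call, plus a wrapper that hides the bookkeeping behind a "declare the work, wait for completion" interface. The class and method names here are made up for illustration; this is not the .NET CountdownEvent API or any standard library:

```python
import threading

class Countdown:
    """Sketch of a countdown latch with explicit 'done adding' semantics."""
    def __init__(self):
        self._count = 0
        self._sealed = False
        self._cond = threading.Condition()

    def add_count(self):
        with self._cond:
            assert not self._sealed, "no increments after sealing"
            self._count += 1

    def no_more_counter_increments(self):
        # The explicit 'done adding' call, instead of the add-one-before,
        # signal-one-after trick.
        with self._cond:
            self._sealed = True
            self._cond.notify_all()

    def signal(self):
        with self._cond:
            self._count -= 1
            self._cond.notify_all()

    def wait(self):
        with self._cond:
            self._cond.wait_for(lambda: self._sealed and self._count == 0)

def spawn_all(tasks):
    """Hide add/seal from the caller: declare the work, wait for completion."""
    ce = Countdown()
    threads = []
    for task in tasks:
        ce.add_count()
        t = threading.Thread(target=lambda f=task: (f(), ce.signal()))
        threads.append(t)
    ce.no_more_counter_increments()   # sealed before anything runs
    for t in threads:
        t.start()
    ce.wait()

results = []
spawn_all([lambda: results.append(1), lambda: results.append(2)])
print(sorted(results))  # -> [1, 2]
```

    The sealed flag makes the latch refuse late increments outright, which turns the "forgot to signal the extra count" bug into an immediate, visible failure.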

  • E2E: Erik Meijer and Burton Smith - Concurrency, Parallelism and Programming

    On the subject of strict versus lenient evaluation:

    It seems that an advanced enough runtime can and should use both, based on accumulated "knowledge" (stats) about the workloads being executed.


    The expectation that something can be strictly evaluated is false in an absolute sense, because each and every CPU instruction and/or memory read/write may fail due to faulty hardware. Yet it can be statistically true. If the hardware is somehow known to be 99.something% reliable, such an assumption can be made safely (in a statistical sense); otherwise nothing could ever be computed or done.

    (I believe that proponents of strict evaluation are stuck because they base their reasoning on incorrect assumptions without explicitly stating what those assumptions are, which is a known issue that plagued physics for centuries, and most likely still does.)


    The same must apply to algorithms as well. If an algorithm is known to be predictable on a given workload (either statistically or by divine intervention of the mister human), it's OK to evaluate it strictly. If there is no prior knowledge, lazy evaluation is the way to go; please gather execution stats upon exit so they can be reused in future evaluations/executions. And if it does not exit in the requested amount of time, abandon the execution (preferably kill it first) and blacklist it (till the end of time, or the next divine intervention).
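    That policy - strict when stats say "predictable", lazy otherwise, with a blacklist for offenders - can be caricatured in a few lines of Python. All the names and the time budget below are invented for illustration; a real runtime would need preemption to actually kill a runaway evaluation:

```python
import time

stats = {}        # task name -> runtime observed on a previous run
blacklist = set() # tasks that blew the budget
BUDGET = 0.5      # seconds; an arbitrary, invented threshold

def evaluate(name, thunk):
    """Strict if stats say the task is predictable, lazy otherwise."""
    if name in blacklist:
        raise RuntimeError(f"{name} is blacklisted")
    if stats.get(name, float("inf")) <= BUDGET:
        return thunk()                       # known cheap: evaluate strictly
    def force():                             # unknown: defer; measure on force
        start = time.monotonic()
        result = thunk()
        elapsed = time.monotonic() - start
        if elapsed > BUDGET:
            blacklist.add(name)              # never run it eagerly again
        else:
            stats[name] = elapsed            # promote to strict next time
        return result
    return force

first = evaluate("sum", lambda: sum(range(100)))   # no stats yet -> a thunk
print(callable(first), first())                    # -> True 4950
second = evaluate("sum", lambda: sum(range(100)))  # now known fast -> strict
print(callable(second), second)                    # -> False 4950
```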


    From 10,000 feet it looks like a nice logical schema with a feedback loop, which is statistically a necessity for each and every successful ecosystem (observe nature).