evildictaitor

Comments

  • Brian Beckman: Project Quark - A New Beginning for Quantum Computing Rises from the Ashes of Theoret

    paulcoder wrote:
    

    Thus combining the ideas of computability, physics and group theory is not so outlandish after all! You would probably even get it published in a respectable journal.



    Unifying theory of computability, group theory and physics

    This has already been done. Let's try and get the gist of it down in just a few paragraphs.

    The Church-Turing thesis, together with the proven equivalence of the two models, tells us that the set of problems solvable in the lambda calculus is exactly the set of problems solvable by a Turing machine.

    And we have the fact that the lambda calculus is subject to Goedel's incompleteness theorem: it constitutes an omega-consistent recursive class k of formulae, and thus there exist infinitely many statements p for which (given v, a free variable in p) neither v Gen p nor neg(v Gen p) belongs to Flg(k), the set of statements provable from k.
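
    For reference, here is the theorem being leaned on (Goedel's Theorem VI from the 1931 paper), written out in LaTeX rather than prose; v, Gen, Neg and Flg are Goedel's own notation as used above:

        % Goedel's first incompleteness theorem (Theorem VI, 1931), roughly:
        % for every omega-consistent recursive class kappa of formulae there is a
        % recursive class-sign r (with free variable v) such that neither
        % "v Gen r" nor its negation belongs to the consequences Flg(kappa).
        \forall \kappa \, \Big[ \kappa \text{ recursive and } \omega\text{-consistent}
            \;\Longrightarrow\;
            \exists r \, \big( (v\,\mathrm{Gen}\,r) \notin \mathrm{Flg}(\kappa)
            \;\wedge\; \mathrm{Neg}(v\,\mathrm{Gen}\,r) \notin \mathrm{Flg}(\kappa) \big) \Big]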

    In brief this means that there are non-computable problems inside the lambda calculus and thus for Turing machines, but also (and importantly) there are non-computable problems in mathematics, and the Turing-unsolvable problems are a subset of the problems which cannot be solved in mathematics.

    Couple this with the fact that every law of physics is in fact a transformation from a physical "world view" (observations and so on) into a mathematical rule-system and back again, typically pausing in the maths stage to perform some mathematical transformations before returning (the mathematical transform is what we refer to as the "rule" or physical "law").

    For instance, given a mass of 1 kg being pushed by 1 Newton of force, we transform this (using physics) into the system {m = 1 kg, F = 1 N}, apply the rule {F = ma}, solve using mathematics to get {a = F/m = 1 m/s^2}, and then translate back via physics into the statement "it will accelerate by one metre per second per second for as long as the force is applied."
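
    Written out as the round trip described above (plain LaTeX, using only the numbers already given):

        % physics -> maths: translate the observation into symbols
        m = 1\,\mathrm{kg}, \qquad F = 1\,\mathrm{N}
        % maths: apply the rule and rearrange
        F = ma \;\Rightarrow\; a = \frac{F}{m} = \frac{1\,\mathrm{N}}{1\,\mathrm{kg}} = 1\,\mathrm{m/s^2}
        % maths -> physics: read the result back as "it accelerates at one
        % metre per second per second while the force is applied"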

    Consequently the problems which cannot be solved by computers are either equal to or a subset of the problems that cannot be solved in physics, which are in turn equal to or a subset of the problems that cannot be solved within mathematics (and it is important to note that this is "unfixable"; i.e. we cannot change maths to compensate, since it will always be either incomplete (we don't know everything) or inconsistent (we've proved some things which don't happen)).

    So there is a unification for you which combines group theory, unsolvability and physics.

    On why there is almost certainly no Theory of Everything (and the conditions under which one could exist)

    Finally we should note that if a theory of everything T existed then we would be able to ask it any question in physics and it must finitely terminate with the correct answer as to what will happen. But if this theory T is to exist within science it must be rigorous, and thus must itself be a subset of mathematics. Given a statement p inside the axiomatic world-view of physics, we then know (since T always finitely terminates with the correct result) that for a free variable v in T, either v Gen p or neg(v Gen p) lies inside the set of provable statements of T, the so-called Flg(T). But we know from Goedel's theorem that there exists a statement x inside T for which neither v Gen x nor neg(v Gen x) lies inside Flg(T), so choosing p = x we have a contradiction.

    It therefore follows that one of the following is true:
    • A theory of everything exists, but it cannot be written in mathematics. At which point it stops being science and becomes philosophy or theology.
    • A theory of everything exists, but for some problems it simply returns the wrong result.
    • A theory of everything exists, but for some problems it will never return a result. And, equally importantly, given Turing's halting problem, you won't be able to tell before doing the computation that it will never halt. Consequently the theory ceases to be a theory of "everything" in the sense that we know it.
    • A theory of everything exists, but it can't tell you everything about "everything", only some things about everything (or everything about something), and thus isn't really a theory of everything.
    • A theory of everything simply does not exist.
  • Brian Beckman: Project Quark - A New Beginning for Quantum Computing Rises from the Ashes of Theoret

    KevinB wrote:
    

    Yeah, the 30 minutes was an achievement, but I must admit this stretched my April Fool's good humour to the limit, 30 minutes of listening to the WMA and then it is an April Fool's. Funny, which is why I didn't do the first post warning others it was an April Fool's, but just a little too long and too much time wasted for comfort...



    It's not time wasted. It's a great discussion on the history of physics (everything up to and including the existence of E8 is true - whether E8 unifies anything in physics is dubious at best, and everything after that is New Scientist-article lies).

    And if you didn't cotton on to the fact that it's an April Fools' joke by the time Brian says they've "finished" theoretical physics, then shame on you Tongue Out

    This all being said, given that quantum computing actually exists (stop laughing), I do wonder whether Microsoft has any active interests in that area.
  • Expert to Expert: Erik Meijer and Bertrand Meyer - Objects, Contracts, Concurrency, Sleeping Barbers

    dcharles wrote:
    Truly old school, he wrote his algorithm on paper! How many programmers do that today?


    Me for one. Excepting trivial functions, most of what I do requires me to think, and I can't do that without a bit of paper and a pencil.
  • Expert to Expert: Erik Meijer and Bertrand Meyer - Objects, Contracts, Concurrency, Sleeping Barbers

    jason818_253.33 wrote:
    

    Regarding the Sleeping Barber Algorithm, and without breaking the demonstration: wouldn’t you want to have as many barbers as you have chairs?


    CRPietschmann wrote:
    This analogy does make the problem sound simple.


    You've both misunderstood the crux of the Sleeping Barber problem. It is a problem not of speed or efficiency, but of concurrency. The point is to serve the customers without process- (customer-) starvation or deadlock, and the Sleeping Barber problem is a double-rendezvous problem requiring more careful use of locks than you might at first think to achieve the correct result.
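
    For anyone who wants to see the double rendezvous concretely, here is a minimal C# sketch of the classic semaphore solution (the chair count, timings and customer arrivals are made-up values purely for illustration, not anything from the video):

        using System;
        using System.Threading;

        class BarberShop
        {
            const int Chairs = 3;
            // The barber sleeps on this until a customer signals it.
            static readonly Semaphore customersWaiting = new Semaphore(0, 100);
            // A seated customer waits on this until the barber signals back.
            static readonly Semaphore barberReady = new Semaphore(0, 100);
            static readonly object mutex = new object(); // protects freeChairs
            static int freeChairs = Chairs;

            static void Barber()
            {
                while (true)
                {
                    customersWaiting.WaitOne();      // rendezvous 1: sleep until a customer arrives
                    lock (mutex) { freeChairs++; }   // the customer leaves the waiting chair
                    barberReady.Release();           // rendezvous 2: call the customer over
                    Thread.Sleep(100);               // cut hair
                }
            }

            static void Customer(int id)
            {
                lock (mutex)
                {
                    if (freeChairs == 0) { Console.WriteLine("Customer " + id + " leaves."); return; }
                    freeChairs--;                    // take a waiting chair
                }
                customersWaiting.Release();          // rendezvous 1: wake the barber
                barberReady.WaitOne();               // rendezvous 2: wait to be called
                Console.WriteLine("Customer " + id + " gets a haircut.");
            }

            static void Main()
            {
                new Thread(Barber) { IsBackground = true }.Start();
                for (int i = 0; i < 10; i++)
                {
                    int id = i;
                    new Thread(() => Customer(id)).Start();
                    Thread.Sleep(30);                // staggered arrivals
                }
                Thread.Sleep(2000);                  // let the last haircuts finish
            }
        }

    The important detail is that no thread ever holds the mutex while blocking on a semaphore, which is what avoids deadlock, and a customer who finds no free chair leaves rather than spinning forever.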
  • Stephan T. Lavavej: Digging into C++ Technical Report 1 (TR1)

    Wizarr wrote:
    

    Maybe have a new keyword called "spnew" that replaces or adds to the "new" keyword, where it automatically handles where it gets allocated from.  Then the type can be inferred by the spnew operation so that it bypasses the GC, but add attributes to tell the GC the scope for deletion.  I haven't really looked too hard at how best you can describe this feature but I think you can have them coexist.



    I think you should have a look at managed C++ - we have two keywords: new works the same as in normal C++, and gcnew tells the garbage collector that it is responsible for cleaning up the object. This allows you to use shared-pointer semantics for C++ types, and the garbage collector for the .NET components.

    Wizarr wrote:
    

    If we can make dotnet more honest, I think we will have more acceptance to things like Haskell and other functional programming languages, while at the same time, being able to have a hybrid version that can handle the problems of Haskell, such as passing around state and how it can handle it.



    I think we're abusing the term "honest" here. If you want contractual honesty inside a program this can already be achieved with Spec#. The reason Haskell isn't widely accepted is that saving state is somewhat messy, and, let's face it, most programs are about interacting with the user, which is much more difficult within a functional language.

    Wizarr wrote:
    

    As far as clarifying my comment about compiler optimization support: I was just thinking out loud, stating that if the compiler was smart enough, then we don't need to declaratively state that we want this reference counted; it can infer it automatically if certain conditions of isolation are all met.  Like all other optimizations, we never tell the compiler to change our code but it does so if it thinks it can guarantee the same output but faster.



    I suspect we'll have to wait for TR1 to become part of the C++ standard before anything like this happens, but certainly the compilers will (and already do) infer usage patterns from your code and replace or insert mechanisms that achieve the same result in less memory, code or time.
  • Stephan T. Lavavej: Digging into C++ Technical Report 1 (TR1)

    Wizarr wrote:
    

    Not meaning to downplay how cool these additions sound to C++, but as a C# developer, it seriously begs the question.  Is it possible to enable a more declarative syntax to C# that allows you to create a special mode in the managed .NET environment where you can bypass the garbage collection?  If you can prove to the compiler that you will have deterministic destruction via these "shared pointers" that will be enforced by the runtime and reference counted I think that will lessen the burden of garbage collecting to those individual resources and speed up your code and be more memory efficient.

    Sure it's possible, but this problem is being arrived at from different perspectives. In dotNET pretty much everything lives on the heap. When you do a new object() or a new array[] construction, the item is built onto the heap, and the thing you are storing in your variable is a reference to the object. In C++ when you use a non-indirected structure (such as T value or vector<X> values) you are creating the object on the stack. This means that with C++ it is plausible for the compiler to know which objects need to be disposed as you leave scope, whereas in dotNET leaving scope is independent of the objects on the heap.

    This being said, deterministic garbage collection is possible for C#, but it is in general more expensive in terms of CPU cycles. Note that you can bypass the GC entirely by using unsafe code.
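
    To make those two points concrete, here is a small C# sketch (the NativeBuffer type is hypothetical, just for illustration): deterministic cleanup is opted into via IDisposable and a using block, and stack allocation via unsafe code keeps the data away from the GC altogether.

        using System;

        // Hypothetical resource wrapper: cleanup is deterministic only because
        // we implement IDisposable and the caller uses a using-block; the GC by
        // itself gives no timing guarantees.
        class NativeBuffer : IDisposable
        {
            public NativeBuffer() { Console.WriteLine("acquired"); }
            public void Dispose() { Console.WriteLine("released deterministically"); }
        }

        class Demo
        {
            static void Main()
            {
                using (NativeBuffer buf = new NativeBuffer())
                {
                    // work with buf
                } // Dispose runs right here, at the end of scope, like a C++ destructor

                // Bypassing the GC for a small, fixed-size piece of data:
                // stackalloc puts the array on the stack, so it dies with the frame.
                unsafe
                {
                    int* values = stackalloc int[16];
                    for (int i = 0; i < 16; i++) values[i] = i * i;
                    Console.WriteLine(values[5]); // prints 25
                }
            }
        }

    (The stackalloc half needs the assembly compiled with /unsafe, and of course it only covers data whose size and lifetime fit the stack.)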

    Wizarr wrote:
    

    As a side note, I think it's an interesting idea of creating a compiler to determine certain code patterns and convert them automatically to be a more efficient managed reference pointing model while providing all the benefits of a managed language.  It could be an option on your compiler optimizer.



    It's not entirely clear what you're getting at here. Please elaborate.
  • Stephan T. Lavavej: Digging into C++ Technical Report 1 (TR1)

    dcuccia wrote:
    [6:17]: "Mallocate"...I like it.


    That's standard C-speak don't-cha-know.


    Oh, and it's an awesome first half. Let us know when you get the next half up, Charles. I'm wanting another hour Tongue Out
  • Joe Duffy, Huseyin Yildiz, Daan Leijen, Stephen Toub - Parallel Extensions: Inside the Task Parallel

    staceyw wrote:
    
    Funny you say that.  During that part, I was thinking it would be cool to be able to add keywords manually using some kind of Extension method deal.  That way, people could experiment back and forth on syntax ideas or just use them in their namespaces.  New keywords or overrides (with supporting libraries) could be a plug-in model to VS.  That way, C# proper stays clean, but could be extended and experimented with.


    Although I understand where you're coming from, this idea would lead to source code becoming "locked" to a machine and IDE, which would make life very difficult when giving your source to someone else (say a colleague) or collaborating in a team, unless very strict rules about who can add new keywords were introduced.

    All in all, it sounds like the type of thing that would only be fully understood by, or helpful to, language developers and a small number of hobbyists, but could significantly damage both the reputation and workability of C# in general, when those same hobbyists and language developers could just as easily use a function.

    In short, I think that would be a bad move for C#.
  • Burton Smith: On General Purpose Super Computing and the History and Future of Parallelism

    sylvan wrote:
    
    Yes, of course, but at what point should we stop saying that the set of transactional variables that has been read/written is "known at compile time"?
    If you actually have to trace down arbitrarily deep into a data structure in order to find the TVar that was actually modified, then I would say that the set of TVars can be figured out dynamically, but it's not something that can be figured out statically at compile time anymore (which was my whole point).


    No, you only look at variable sites; you don't look at items on the heap.

    I see the point you're trying to make, but I do have to point out again that this technology is not new or theoretical. It is currently used in various architectures and in similar mechanisms, such as the CRT exception-handler model for x64 processors.
  • Burton Smith: On General Purpose Super Computing and the History and Future of Parallelism

    sylvan wrote:
    
    evildictaitor wrote:
    
    I fail to see why. All of the variable sites are known at compile time (by definition they are on the stack).

    Why could they not be on the heap?

    Because you need to keep a pointer to that object on the heap in order to use it. Or, put another way, an object on the heap that is not referenced inside the transitive closure of the variables on the stack is eligible for disposal.


    sylvan wrote:
    
    You could read a TVar containing a filter function, and another TVar containing a list of TVars, and then filter this list based on the filter function to get a reduced list of TVars, and then modify each of them based on the value of yet another TVar, etc. etc. How could you possibly know which values have been written to when the IP points "past" this code? The number of TVars that you have modified is not statically known, the filter function is not statically known, and the list of TVars itself is not statically known.

    No, but you know that the list of TVars was read, and you have the TVar site referenced locally in the function's locals. You can see then that the TVars have been changed, and when you want to mutate the TVars object you will need to make a copy of the array (which is an array of pointers and thus not expensive) either as a backup or for preemptive mutation.

    As to whether or not the filter function is known: it is clear that it is known statically, since in managed languages all of the code that may run over the course of the program must be defined prior to program execution.

    sylvan wrote:
    
    evildictaitor wrote:
    
    Again, I can't think of a good reason to spawn multiple threads within a transaction.


    I haven't said anything about spawning multiple threads within a transaction?

    In which case your mechanism of passing messages must either be considered a side-effect (which is another problem, but one that can also be solved) or just a linear sequential operation, which is no different to any other transaction-based function call.
  • Joe Duffy, Huseyin Yildiz, Daan Leijen, Stephen Toub - Parallel Extensions: Inside the Task Parallel

    MetaGunny wrote:
    Charles,

    You brought up a great point/question that has been on my mind for a while.

    Why not add some keyword or attribute to the language itself to further enhance parallelism?

    I can't remember his name but he replied that they are looking into this level of language integration.  I'd love to see a video on that topic.



    Noooo! Leave C# alone! One of the reasons C# is nice and easy to learn is that its core keyword set is quite compact. If you start adding lots of keywords here, there and everywhere, the language gets out of hand quickly.

    On the other hand, there's no reason why a compiler shouldn't be able to take the same keyword "for" to have both the normal synchronous meaning when the for-body is expensive, and the Parallel.For() meaning when the body can be easily split into asynchronous tasks, and to have this as an option inside the build menu.
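
    Just to make the pairing explicit, here is a small C# sketch using the Parallel Extensions API (the per-iteration work is invented purely for illustration):

        using System;
        using System.Threading.Tasks;

        class ForVersusParallelFor
        {
            // A deliberately expensive loop body so the parallel version has
            // enough work per iteration to be worth scheduling.
            static double ExpensiveBody(int i)
            {
                double acc = 0;
                for (int k = 1; k < 100000; k++) acc += Math.Sin(i * k);
                return acc;
            }

            static void Main()
            {
                double[] results = new double[1000];

                // Ordinary synchronous loop: one iteration after another.
                for (int i = 0; i < results.Length; i++)
                    results[i] = ExpensiveBody(i);

                // Parallel.For: the runtime partitions the range across worker
                // threads. Safe here because each iteration writes only to its
                // own slot in the array.
                Parallel.For(0, results.Length, i => results[i] = ExpensiveBody(i));

                Console.WriteLine(results[42]);
            }
        }

    Whether the parallel form actually wins depends on the body being expensive enough to outweigh the scheduling overhead, which is exactly why having it as a build option rather than an automatic rewrite seems sensible.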
  • Burton Smith: On General Purpose Super Computing and the History and Future of Parallelism

    sylvan wrote:
    
    there are variations on this idea, e.g. look up Tim Harris' research and you'll see an approach where the writes in the transactions actually write directly to the "live" objects - optimizing for successful commits, and the transaction logs are only used for rolling back if the commit fails

    It would surprise me if this sped things up, since most objects are accessed via pointer (and certainly are in .NET), and thus whether you copy first and mutate the new object, or make a backup so you can roll back, is merely a question of whether you swap the pointers at the point of a transactional write. But maybe there's some other benefit to this, I don't know.

    sylvan wrote:
    
    However, the suggested strategy of specifying transactional variables up front would have problems with scenarios where the transaction variables aren't known at compile time.

    I fail to see why. All of the variable sites are known at compile time (by definition they are on the stack).

    sylvan wrote:
    
    E.g. if you read transactional variables from a transactional channel, and do transactional updates on them. Since those transactional variables can come from any other thread that has access to the channel (and depend on user input or whatever), and you don't even know how many of them you'll get, there's no way of producing a static lookup of which variables have been read based on the instruction pointer.

    One of the points of transactions is that they are (effectively) synchronous. That is to say that if a transactional message is passed to a function, either that function must run synchronously and return, or it must guarantee that the source thread relinquishes control of the object and that the result of the computation is non-side-effecting.

    sylvan wrote:
    
    As far as I can see you either need a transaction log, or you disallow transactional variables as first class citizens. So I'm not sure how well the log-less idea would work in practice (it seems to me that storing transactional variables inside other transactional variables would be very common, e.g. when you send a message to another thread and also supply a message channel for that thread to send its reply).

    Again, I can't think of a good reason to spawn multiple threads within a transaction. If you start to get to that level of complexity, the effect of a rollback or commit would effectively be to cancel a large number of transactions and do many copy-backs at the point of commit (or abort), which strikes me as needing a model other than transactions for efficient use.