To the point: I really REALLY like the Lightweight Beta edition of the MSDN C# & F# Reference.
This Michael Howard guy's emphasis on security as a core academic subject to be studied in universities world-wide is 100% true and crucial for the current day, but I'd say it's a bit easier to get it into universities than to have one hero do the dirty work. These days universities rarely care about the future research that might actually solve the problems, and instead focus ALL their funding on workforce education & training rather than the R&D I only wish I could experience now. All I get are C#, Java, Algorithms, Data-flow, etc. So it's basically your job, as industry, to tell the universities you require these skills so they will provide them. That doesn't seem justified to me, but it would work, since they are already led astray by the "economical" requirement that you want them to train their students for career success in placeholder positions.
I'd be interested to hear otherwise in other people's comments and academic experiences; they would be lucky to have such formal training instead of my self-guided learning curriculum of interests.
Concerning the possible lectures on C9: I'm already a functional programmer, so I only skim the functional programming videos lightly. On the other hand, I would really appreciate and enjoy a security expert's take on what to watch out for, like common pitfalls and caveats around code vulnerabilities. A little series going over core secure data structures or constructs (things I don't really need to worry about coming from the Haskell world) would apply nicely to my current learning of C# (with Dev10 Beta2, of course) in my university classes right now.
On a side note, my first test run of MiniFuzz showed no crashes in the log for my Assignment #4 at university. So far, so good.
This security model reminds me a lot of how monads control side effects at the language level for I/O and for managing complexity. I watched this video once more just to verify the parallel with Haskell's type system, and I found it to be roughly 80% mappable onto the monad model. Maybe an opt-in, pluggable type system for the CLR/DLR for security, with these sorts of checks built in based on Haskell's unmatched type system, would be a fun project to explore.
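To make the monad parallel concrete, here is a rough Python sketch of the idea (Python just for illustration; the names `Tainted`, `map`, and `sanitize` are my own invention, not any real CLR/DLR API). Like the IO monad fencing off side effects, a wrapper type keeps untrusted data out of reach of plain code until it passes an explicit check:

```python
# Hypothetical sketch: a "tainted" wrapper that mimics how a monad
# fences off side effects. Untrusted input stays inside the wrapper;
# it can be transformed (like fmap) but only escapes through an
# explicit sanitize step that runs a security check.

class Tainted:
    def __init__(self, value):
        self._value = value  # hidden from direct use by plain code

    def map(self, f):
        # transform the value but stay inside the wrapper, like fmap
        return Tainted(f(self._value))

    def sanitize(self, check):
        # the only exit: a validator decides if the value may escape
        if not check(self._value):
            raise ValueError("failed security check")
        return self._value

user_input = Tainted("42; DROP TABLE users")
shouted = user_input.map(str.upper)           # still Tainted
safe = Tainted("42").sanitize(str.isdigit)    # "42" passes and escapes
```

The compiler-enforced version of this (where forgetting `sanitize` is a type error, as it would be in Haskell) is exactly what a pluggable type system could add.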
Oct 08, 2009 at 9:24 AM
One amazing add-on for GHC is WinGHCi, a stand-in frontend that replaces GHCi (while still using GHC underneath), found at http://code.google.com/p/winghci/ I fully recommend it, since I haven't been disappointed by it yet (and it prints to screen ~20x faster than the original ghci output).
I think a nice feature specific to Chan9 videos would be an account-level perk that remembers the play position of an incompletely viewed video, so whenever you leave and come back you can continue from where you were watching.
He's indeed an oddball: all 7 of his mentioned myths are themselves... myths! He just picked the truths apart into partial truths to make a sales pitch. For the record, though, I LOVE the cloud, but how he is bending words for his benefit is against what I believe.
- Azure is what? Architecture+Infrastructure = Platform
- Damn right every major vendor will have their own "Cloud"; Oracle, Microsoft, Dell, Google, etc. will all have their own clouds, out of fear of Intellectual Property, trade secrets, or source code leaking into another vendor's hands.
- SaaS IS a cloud app; it's an application which runs on many computers at once. Applications easily fit into a SaaS topology: think of one element of a naive array as one physical machine, then execute a closure on each machine for roughly balanced workloads.
- Clusters and movie-production render farms are totally different; not even Map-Reduce qualifies as cloud computing to me. It's far too limited in scope to be as general-purpose as what historical grids have provided, like folding@home for example.
- Well, it's not like you really have a cloud on 1 machine; that's just being stuck in the historical rut of sequential code run on 1 core, which hides multi-threaded bugs for the same reason. The coolest (and only) way to achieve local cloud computing would be to bind 1 VM to each core of a future many-core chip, with hardware-supported message passing between cores on the motherboard, creating a local Virtual Grid.
- The Cloud is 100% dependent on the internet for both its transport mechanisms and its encapsulation methods, so he just got his thoughts reversed.
- This one is the most radical, and it's almost not a myth on one merit: the ONLY thing that cannot be in the cloud is what you use to access it, the bootstrap mechanism (point of entry). Once that is no longer the case, say with an embedded OS or a mobile device, THEN all data will be in the cloud, freeing up every company and every person for flexible confidentiality of their data, with accessibility from any location on Earth as long as they are bootstrapped to access the Cloud (Internet). So if you omit the method of accessing the cloud from "everything", then yes, EVERYTHING will be in the cloud.
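The "one array element = one physical machine" picture from the SaaS point above can be sketched in a few lines (Python purely for illustration; the `machines` list and `deploy` closure are stand-ins I made up, not any real cloud API). Each slot in the array stands in for a host, and the same closure is mapped over all of them:

```python
# Sketch of the naive-array-of-machines picture: each dict stands in
# for one physical machine, and we fan the same closure out over all
# of them, as a simple SaaS-style deployment would.
from concurrent.futures import ThreadPoolExecutor

machines = [{"id": i} for i in range(4)]  # stand-ins for physical hosts

def deploy(app_version):
    # closure capturing the app version to "run" on each machine
    def run_on(machine):
        return f"machine {machine['id']}: running app v{app_version}"
    return run_on

with ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(deploy("1.0"), machines))

for line in results:
    print(line)
```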
Just my thoughts for now, time to go to another vid as I wait for PDC beauty.
Sep 19, 2009 at 11:12 PM
Charles and Channel 9'ers, I actually believe I've found that magic sauce / Silver Bullet / Free Lunch of concurrency & parallelism: running sequential programs in a concurrent manner on a multi-core chip.
In no relation to relativity, I call it Relativistic Computation. I'm in school, so time is limited, but by relativistic computation I just mean this: when code gets pumped into a scheduler in a sequential way, its functions are put into a CPU opcode pipeline, and when the stream has flowed over all the cores and wraps around to start on the first core again, the scheduler reassesses which function to execute based on its progress over the available array of cores.
A data-dependent increment to 40, showing which number goes where at each step, would look like:
Step #: 01 - 02 - 03 - 04 - 05 - 06 - 07 - 08 - 09 - 10
—————————————————————————————————————————————————
Core 1: 01 - 05 - 09 - 13 - 17 - 21 - 25 - 29 - 33 - 37
Core 2: 02 - 06 - 10 - 14 - 18 - 22 - 26 - 30 - 34 - 38
Core 3: 03 - 07 - 11 - 15 - 19 - 23 - 27 - 31 - 35 - 39
Core 4: 04 - 08 - 12 - 16 - 20 - 24 - 28 - 32 - 36 - 40
Even though it looks plainly obvious, it has never been implemented!! Just have an input/output scheduler pump dependent opcodes sequentially into separate CPU pipelines to saturate each core's pipeline.
Basically, it's striping a STREAM of assembly opcodes across the CPUs: after the scheduler does a quick data-flow analysis, the recomputed true op/function each core runs becomes +4 per core instead of the original +1, as in the example above showing the concept at work.
The scheduler is huge here, controlling data access to and from both inputs and outputs of the data streams to help handle conditional execution and data dependencies; having 1 scheduler control 1 chip and its cores, in a User-Mode Scheduler (UMS) or a VM abstraction for memory isolation, would be quite awesome.
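The striping in the table can be simulated in a few lines (Python as a stand-in; a real implementation would live in the scheduler/opcode layer, and this sketch assumes a simple round-robin deal with no data-dependency analysis). Each of the 4 cores ends up with a stride-4 slice of the original +1 stream:

```python
# Simulation of round-robin "striping" a sequential stream of ops
# across 4 cores: ops 1..40 are dealt out one per core in turn, so
# each core sees a stride-4 slice (core 1 gets 1, 5, 9, ...),
# matching the step table above.
NUM_CORES = 4
ops = list(range(1, 41))  # the sequential increment stream

cores = {c: [] for c in range(NUM_CORES)}
for step, op in enumerate(ops):
    cores[step % NUM_CORES].append(op)  # scheduler deals ops round-robin

for c in range(NUM_CORES):
    print(f"Core {c + 1}: {cores[c]}")

# each core's local sequence advances by +4 per step instead of +1
assert all(b - a == NUM_CORES for a, b in zip(cores[0], cores[0][1:]))
```

The hard part a real scheduler would face, which this sketch skips entirely, is the data-flow analysis: ops with dependencies across stripes cannot simply be dealt round-robin.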
I'd really enjoy having a respectable Microsoft Technical Fellow check whether this is even sensible, but it seems to work in my head.
This is my first post here, but I am a long-time Channel 9'er and avid Haskell enthusiast, so it's nice seeing Erik being challenged by someone like Mr. Lampson.
Until next time Charles