I'm just curious why disabling "Colors" in the "How Stuff Works" demo makes such a huge speed difference. On IE9, the demo runs 4 times faster when the "Colors" option is disabled. After all, gray is also a color.
A more general question related to these tutorials:
Are you going to have a tutorial on creating 3D games as well? All these are related only to 2D. I realize 3D game development is a huge topic, but even something like an overview of which tools to use to create 3D content (levels and characters), 3D animations, the pros and cons of creating your own game engine vs. using an existing one, etc. would be helpful.
This is very interesting for 2D purposes but does anyone know where I can find info about techniques and tools that I can use to create and animate 3D models?
My 3D engine and collision detection are essentially complete now, but I'm still not sure what the best tools are for actually creating content and animations for XNA games.
I've been out of the loop for some time but I'm interested in continuing my game project. Right now I'm importing levels from Unreal Editor (I specifically need level objects in CSG format, because I use this to dynamically create portals).
Well, supposedly the reason they didn't make a WP7 version first is that you simply can't do this on WP7 right now, because apps can't get direct access to the camera.
Great, except they should have put the resources they used to develop the iOS app into fixing WP7 instead. It really undermines confidence in WP7 when MS does stuff like this, and apparently they think a simple technical explanation is going to make it OK. To me it makes it worse, because now they are just highlighting the fact that WP7 has technical limitations.
Seriously, someone at MS should take control of this situation and not let stuff like this happen anymore. Maybe someone can explain to me how this is a "plus" for MS on any level, because I just can't see it.
OK, I finally had some time to isolate my pitch tracker class and create a CodePlex project. I would be interested to find out whether you can use my pitch tracker in your project, and what the results are.
From my tests, my algorithm has an error of less than 0.02% over the frequency range of 55Hz to 1.5kHz. Accuracy is unaffected by the amplitude, frequency, or complexity of the waveform.
OK I had some time to clean up my code. I am working on creating a CodePlex project in order to publish it. I also first want to create some sort of sample app in order to demonstrate the code in use (even though you need just three lines of code to instantiate and get your first pitch results back). Hopefully I will be done with it by this weekend.
Unfortunately, right now my code isn't quite fit for public release. There are some dependencies on other code that I don't want to post and which isn't really pitch-related. For instance, I have a DSP class that the pitch-detection algorithm uses, but most of that class's code is unrelated to pitch.
If I have some time available I will clean it up and post it. Most likely before the end of the weekend.
I have also implemented a pitch detection algorithm, which is used to display a realtime pitch graph in a VST plugin (although VST uses an unmanaged API, my plugin is written in C# and uses reverse P/Invoke). This plugin is used as a visual guide to train a singer to sing in key (or for vocal exercises), and can also display a "grade" based on previously entered notes (or a MIDI clip) that have to be hit throughout the song. My algorithm is loosely based on auto-correlation, but it is heavily modified to solve the following two problems with it:
Auto-correlation is slow: Since you need to compare every sample against the shifted copy of the window, and repeat that once for every single frequency you want to detect, the cost adds up quickly.
The way I solved this is by first down-sampling to reduce the overall number of samples to work with (with an anti-alias filter to prevent aliasing noise, which also conveniently removes frequencies I don't care about), and then running three passes: one each at low, medium, and high resolution.
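As a rough illustration of the down-sampling step (not the author's actual code; the moving-average filter here is a crude stand-in for a proper anti-alias filter):

```python
def downsample(samples, factor=4, taps=8):
    """Low-pass filter, then keep every `factor`-th sample.

    The moving average attenuates high frequencies that would
    otherwise fold back (alias) into the band of interest once
    we decimate. A real implementation would use a properly
    designed FIR/IIR low-pass instead.
    """
    filtered = []
    for i in range(len(samples)):
        window = samples[max(0, i - taps + 1): i + 1]
        filtered.append(sum(window) / len(window))
    # Decimate: keep every `factor`-th filtered sample.
    return filtered[::factor]
```

With a decimation factor of 4, a 44.1kHz stream drops to ~11kHz, which still comfortably covers the 55Hz–1.5kHz range mentioned above while quartering the work per correlation.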
In the first pass, I only test a total of 5 samples that are spaced out within the two sample windows. You can quickly tell there just isn't going to be correlation if those 5 samples all differ from the shifted window's equally spaced samples. The second pass uses more samples, and its detected frequency becomes the starting point for the final pass, which uses more samples still.
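A minimal sketch of that coarse-to-fine idea (names, point counts, and the difference metric are illustrative, not the author's implementation):

```python
def difference(samples, lag, num_points):
    """Average absolute difference between the window and its copy
    shifted by `lag`, evaluated at only `num_points` evenly spaced
    positions. A value near zero means strong correlation."""
    n = len(samples) - lag
    step = max(1, n // num_points)
    points = range(0, n, step)
    return sum(abs(samples[i] - samples[i + lag]) for i in points) / len(points)

def detect_period(samples, min_lag, max_lag):
    """Two-pass search: a cheap 5-point pass over all candidate lags,
    then a denser pass only around the coarse winner."""
    coarse = min(range(min_lag, max_lag + 1),
                 key=lambda lag: difference(samples, lag, 5))
    lo, hi = max(min_lag, coarse - 2), min(max_lag, coarse + 2)
    return min(range(lo, hi + 1),
               key=lambda lag: difference(samples, lag, 40))
```

The cheap first pass rejects most lags after touching only 5 samples each; the expensive dense comparison only ever runs on a handful of lags near the coarse winner.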
Auto-correlation is inaccurate, especially at higher frequencies: As you have clearly seen in your results, the higher the frequency, the less accurate it becomes. This is because a higher-frequency waveform has fewer samples per cycle, and you are stepping in whole sample values, so the frequency steps become coarser the higher it is.
To solve this, for the 3rd pass in my algorithm (at which point the search is centered around the frequency that was detected in the second pass), I use interpolation in order to compare samples that are not limited to whole numbers. So sample 0 will be at position 0.0, sample 1 will be at position 0.674, or whatever. This allows me to space the sample steps so that they land exactly at the frequency I want to detect during that pass, as opposed to being quantized into ever-coarser frequency steps. I use a 4-point, 3rd-order Hermite interpolator.
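The 4-point, 3rd-order Hermite interpolator is a standard DSP building block; one common formulation looks like this (a sketch — the exact coefficients the author uses may differ):

```python
def hermite4(frac, y0, y1, y2, y3):
    """4-point, 3rd-order Hermite interpolation.

    Returns the value at fractional position `frac` (0..1) between
    y1 and y2, using y0 and y3 to estimate the slopes at the ends.
    """
    c0 = y1
    c1 = 0.5 * (y2 - y0)
    c2 = y0 - 2.5 * y1 + 2.0 * y2 - 0.5 * y3
    c3 = 0.5 * (y3 - y0) + 1.5 * (y1 - y2)
    return ((c3 * frac + c2) * frac + c1) * frac + c0

def read_sample(samples, pos):
    """Read a sample at a non-integer position such as 0.674.
    Assumes 1 <= pos < len(samples) - 2 so all four taps exist."""
    i = int(pos)
    frac = pos - i
    return hermite4(frac, samples[i - 1], samples[i],
                    samples[i + 1], samples[i + 2])
```

This is what lets the correlation lag be, say, 37.42 samples instead of 37 or 38, so the candidate frequency isn't quantized to whole-sample periods.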
Each step in the high-resolution pass is 1.005 times the previous frequency, so I don't use linear frequency steps. Also, once I've found the two steps with the highest correlation, I interpolate between those two, so the final detected frequency has even higher resolution than the 3rd pass's step size.
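A sketch of the geometric step spacing and the final interpolation between the two best candidates (the weighting scheme here is one plausible reading of "interpolate between those two", not the author's confirmed method):

```python
def geometric_steps(start_hz, stop_hz, ratio=1.005):
    """Candidate frequencies spaced by a constant ratio, so resolution
    is proportional to frequency (roughly constant in musical cents)
    rather than constant in Hz."""
    freqs = []
    f = start_hz
    while f <= stop_hz:
        freqs.append(f)
        f *= ratio
    return freqs

def interpolate_peak(f1, corr1, f2, corr2):
    """Correlation-weighted average of the two best candidate
    frequencies, giving sub-step resolution on the final answer."""
    return (f1 * corr1 + f2 * corr2) / (corr1 + corr2)
```

A ratio of 1.005 is about 8.6 cents per step, so even before the final interpolation the grid is much finer than the roughly 5-cent limit of human pitch discrimination.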
This results in a very fast and accurate pitch detection algorithm. From my tests, the accuracy is within 0.1% of the input frequency, which, IIRC, is about 50 times finer than what humans can distinguish.
Are the ABXY buttons exactly the same physical size as the old controller's? I ask because I don't like the new colors and I would have no problem swapping the buttons around from an older controller (it's not too difficult to open these controllers if you know how).
You know, I was all excited about the new controller and went to Best Buy to get two. Except they had no clue what I was talking about. Then I called Gamestop and they also didn't have a clue what I was talking about. "Huh? For what console did you say this was...?"
So as a follow-up comment to the GC-related discussion: Shawn mentioned that they were all initially "skeptical" of using managed code for game development. I have a feeling that the XNA team was a great source of motivation to optimize .Net in ways that make sense for "realtime" applications like games. So I'd be interested to know what influence the XNA team had with the .Net team in this respect. I mean, they must communicate, right?
Shawn also basically mentioned that when using XNA, you are still required to have a deep understanding of memory management in .Net (EDIT: For instance, did you know that calling properties through an interface will automatically box any value types? I didn't, and how are we supposed to know this?). This is what my question was about: how we basically give up a huge part of the advantage of using a managed language. This problem can be solved almost completely by moving to an incremental/concurrent GC. With that, you no longer have to sweat the small things, and some amount of garbage per frame would be completely acceptable, even if your heap is complicated (almost a given in a complex game).
I understand this is complicated, and we won't see something like this soon. But what is being done in this regard, and are they even talking about evolving the GC in this direction, even if it's some time off in the future? I think it is worth mentioning once again that Java has an optional low-latency incremental GC, so it is not impossible to do.