Singularity Revisited

keeron wrote:Would love to see people talk about real-world usage and places where it makes more sense to use this than to use traditional system calls (given the cost of using transactions).
What is the difference between a transactional file system and a journaling file system, if any?
I'm pretty sure Verma said you can still use traditional file operations when you don't need transactional guarantees; both are supported. However, any changes committed by another transactional action will show up for any app using non-transactional operations on the same file. So, use transactional operations if you need a guaranteed state when reading or writing; otherwise, traditional file operations are your best bet for less overhead. Or you can just do what the pros do, and wave a magnet over the platters in the correct sequence.
ph wrote:You talk about the performance cost of transactions... for applications that do not need transactions (not all applications do), can an application not use the transaction features and recoup the performance hit?
Surendra,
We almost met in Sao Paulo on Friday. I recognized your face and planned to ask you a few questions about TxF, but then we didn't meet.
1. If you compare the isolation level that TxF provides with the typical isolation level of databases, which would be the best match? Snapshot by any chance?
2. I read that the TxF API changed a lot in RC1. For what I have seen (superficially), it looks a lot less transparent than before. I may be wrong, but it seems that before you could just create an ambient transaction and begin working with files as usual, and now you have to explicitly invoke transactional versions of the file systems APIs to get transactional behavior. Am I right? Was this design change implemented on purpose or was it because of a roadblock that you had to sacrifice the original simplicity?
3. I would like to understand better the role of KTM and DTC. Does it work similar to promotable transactions in ADO.NET? Is it correct to think of KTM as some kind of lightweight transaction manager that will only delegate to DTC when it needs to coordinate with resource managers of other kinds?
4. Do you think DTC is too expensive nowadays performance-wise for what it does?
Thank you. And it was nice to see a Channel 9 celebrity with my own eyes.
SurendraVerma wrote:Hi Diego,
Sorry I missed you in Sao Paulo. I really enjoyed Brazil and Sao Paulo in particular.
You asked some really good questions. Let me try to address them:
...
I did not see your answer until Saturday. Thanks for being so kind as to give me your address. Unfortunately, I am not sure you are receiving my messages, because I am not getting any delivery confirmation. I just hope you are busy celebrating the Vista shipment.
Well, my email was actually a follow up to this thread, so here it is:
Sorry for trying to understand TxF and TxR from a database/managed developer point of view, but this is where I am coming from, after all.
Regarding the isolation level you describe, I guess if I had to explain the TxF/TxR isolation level to a database geek, I would just tell him: “it’s just like REPEATABLE READ”.
Regarding the change in the APIs, I understand why you had to switch to an explicit model; however I grieve over what you had to give up.
While the problem with the implicit model is that you cannot opt-out of the ambient transaction, with the explicit model you cannot easily get transactional behavior through the “upper” programming stack without rewriting a good share of it.
I am sure that alternatives were carefully considered. However, if it is not abusing your time, may I ask you about the merits and flaws of an alternate approach that I have been thinking of? For sure I am not the only one who has thought about this:
1. The original version of the API supports an ambient transaction and you mentioned that you could still expose that transaction handle at the thread level. I am not sure where things landed in the final Vista build.
2. Apparently, to work with any of the managed or unmanaged APIs that could take advantage of transactional semantics, you have to call a function with a file name (or key name for TxR) as a parameter at some point, and in case you are talking to one of the higher level layers, those will ultimately invoke one of a small set of Win32 name-based APIs.
So, what if you could define a “transactional” moniker that you would prepend to the file name or registry key parameter in order to get “implicit” transactional semantics? I can imagine something like “txfile:”, but it could be something else.
I think this would have a few significant effects:
1. The non-transactional semantics of existing code would be preserved even in the presence of an ambient transaction, because hardcoded and existing strings would be lacking the moniker.
2. You would get back the possibility of using transactions with any existing API that takes a file name (or a key name) from Win32 through the .NET Framework, requiring minimal changes.
3. If you included something like this in a future version of Windows, you could still mix explicit, Vista-like calls, with implicit, moniker based calls enlisted in the same transaction.
I understand that this is not a completely clean approach, and maybe there is even something naïve about it. But in any case, it would be nice to learn what you think the tradeoffs are.
Thank you for listening! And congratulations on shipping this cool technology in Vista! It is really something I have wished for many times.
Diego
I have an application which archives a file's data out to the cloud, which (1) renders the file offline and (2) leaves a 4K stub on the local file system. On an XP file system, modifying properties does not trigger a file recall (a recall is triggered by reading any part of the data). With the transactional changes, the file is recalled when viewing its properties. Two questions: (1) Does viewing a file's properties count as reading data? (2) Is there a way to disable the transactional model in Vista or Win7?
Thanks