I don't see why not. I'm interested to hear the use case. One thing to note: just because you *can* do something doesn't mean you necessarily should. For example, generating an ML model on the fly in a stream means you only have access to a windowed snapshot of the data, which is likely not very much data. You could theoretically bring in the historical stores as well, but then, in my opinion, you are defeating the purpose of Stream Analytics.
I would generate an ML model from my historical stores first, then dynamically pull that model up from the stream and compare incoming objects against it. I also do normalization of windowed objects (if necessary) in the stream. You have to architect your ML workflow fairly intelligently to use it in Stream Analytics, because updating the query itself requires recycling the stream job. You could theoretically stand up a second job with the new query, then shut down the first. I haven't tried it, but it should work.
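The pattern above (train on historical data first, then load that pretrained model to score incoming events, with normalization applied consistently) can be sketched in Python with scikit-learn. This is purely illustrative: the data, feature names, and `score_event` helper are made up, and in a real Stream Analytics pipeline the scoring would typically go through an Azure ML endpoint rather than local Python.

```python
# Sketch: train offline on a historical store, persist the model,
# then load it in the "streaming" side and score events one at a time.
import pickle
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# --- Offline: train on historical data (toy numbers, illustrative only) ---
historical_X = [[1.0, 200.0], [1.2, 210.0], [5.0, 900.0], [5.5, 950.0]]
historical_y = [0, 0, 1, 1]

# Baking normalization into the pipeline means streaming events
# receive exactly the same scaling the training data did.
model = make_pipeline(StandardScaler(), LogisticRegression())
model.fit(historical_X, historical_y)

# Persist the trained model; a streaming job would load this artifact.
blob = pickle.dumps(model)

# --- In the stream: load the pretrained model and score incoming events ---
scorer = pickle.loads(blob)

def score_event(event):
    """Score a single incoming event against the historical model."""
    return int(scorer.predict([event])[0])
```

To swap in a retrained model without stopping the scoring loop, you would load a new artifact into `scorer` (the code-level analogue of standing up a second job and retiring the first).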
@Lars: wasb:// Blob storage is HDFS compliant; however, for new products I would suggest Azure Data Lake instead, as it is also HDFS compliant, offers effectively unlimited storage capacity, and supports U-SQL and Azure Data Lake Analytics workloads.
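For reference, the two storage options are addressed with different URI schemes from HDFS-compatible tooling. The account, container, and path names below are placeholders, not real endpoints:

```shell
# Azure Blob storage (HDFS compliant via the wasb:// scheme)
hdfs dfs -ls wasb://mycontainer@myaccount.blob.core.windows.net/data/

# Azure Data Lake Store (HDFS compliant via the adl:// scheme)
hdfs dfs -ls adl://myaccount.azuredatalakestore.net/data/
```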