ARCast.net - Scenario Based Architecture Validation


Description

How many times have you been through an architecture validation exercise? My guess is that not many people have ever done this. Most of the time we judge the architecture late in the project, when we decide whether large amounts of re-work must be done to make things fit together. I'm sure you would all agree that doing an architecture validation review is a great idea, but how? My guest today is Dragos Manolescu, one of the architects from the patterns & practices team here at Microsoft, with some terrific insights on Scenario Based Architecture Validation.




-Ron

The Discussion

  • bbryant
    What a great podcast. Explaining an architecture is hard enough, but being able to review one effectively must be an art. I blogged the other day about my view on documenting architectures, which seemed to align very nicely with Dragos' scenario-based approach to reviewing an architecture.

    Read my post at:

    http://blairbryant.wordpress.com/2007/03/28/document-the-journey-as-well-as-the-destination/

    Blair
  • rojacobs
    Dragos Manolescu: If you look at how we are building applications today, they are very different than what we did 10 years, 20 years ago. Twenty years ago we were very happy if the application just ran the way we wanted it to run.
    Ron Jacobs: That's right. I remember those days. Twenty years ago, that was right when I was starting my career in this industry. I have seen a lot of things come and go, but today, today is a first. That's right, ladies and gentlemen, this is the fastest turnaround ever of any ARCast episode from the time it was recorded to the time it goes live on the air.

    Why? Because this is such a killer topic. Today we are talking about architecture validation. How do you validate an architecture? How do you look at it and say whether it will work or not, and what are the ways you can get that done? It's important to do, but I'm not sure we know how to do it.

    We have a great guest today to help us. Let's welcome Dragos Manolescu.

    [applause]
    Ron: Hey this is Ron Jacobs and welcome back to our ARCast. I am here today in my office in Redmond where I am joined by Dragos. You know it just occurred to me that I do not know how to say your last name.
    Dragos: My last name is very phonetic, it's Manolescu.
    Ron: [laughs] Okay, so where are you from?
    Dragos: I am originally from Romania. All names ending in e-s-c-u are Romanian. I've been in the US since '95. I studied at the University of Illinois with Ralph Johnson in the Illinois Software Architecture Group. Then I worked for a series of consulting companies and for the University of Kansas. I wound up at Microsoft about ten months ago.
    Ron: Wow, that's a long journey. [laughs]
    Dragos: That is a long journey.
    Ron: Today you are part of the patterns and practices group here at Microsoft.
    Dragos: That's right. I worked as a contractor with patterns and practices back in 2003 and 2004. I worked on the "Integration Patterns" book; I am one of the co-authors. I have liked the group ever since. I liked the work they were doing and the types of projects they were tackling at the time, as well as their affinity for patterns.

    I have been a member of the patterns community since 1996. I ran the '99 patterns conference in Illinois, and since then I have written a few papers and chapters and edited patterns books. The proximity to a group that deals with patterns and has patterns in its name was irresistible, so I had to join.
    Ron: [laughs] I love patterns too. I am a big patterns guy and believe in that stuff. Today we are going to talk about something that's maybe not as glamorous to some. It feels like kind of a process, you know, and I know a lot of people get a feeling like, "yuck, process, we have to do it."

    I want to talk about architecture evaluation. This is difficult because it is so hard to pin down. If somebody says, "hey, would you evaluate my architecture?" Man, I begin to wonder how you would even begin to do that and what you would do.

    You have been thinking about this for a while. Tell me about the landscape, first off. What are some of the ways people evaluate architectures?
    Dragos: Let me take one more step back and talk a little about why architecture evaluation is becoming increasingly important. If you look at how we are building applications today, they are very different than what we did 10 years, 20 years ago. Twenty years ago we were very happy if the application just ran the way we wanted it to run.

    These days that is no longer sufficient. Not only are you concerned about whether the application runs but you are also concerned about how it runs. Does it scale? Is it secure? Can it recover from failures, and so on. All these are architecture qualities. Architecture qualities of the application that you are building and they are becoming increasingly important with applications such as the ones that we are used to these days with web services and distributed systems and networks in between and so on and so forth.
    Ron: Don't you think also, though, that there are qualities of the architecture that are not necessarily functional, and not even non-functional, but qualities like the code being easy to understand and follow, that it could be extended or updated easily in the future, and that it is not a complete mess? The architectural design could have an impact on these attributes as well.
    Dragos: Architecture is concerned with how you partition the functionality in an application. This partitioning is crucial when you extend the application or want to modify a particular piece.

    If it is properly designed, it helps you localize change so different teams can work in parallel on different features without stepping on each other's toes. Architecture determines the ability to work on an application, to extend it, and to keep the code tidy as the application evolves over time and keeps acquiring new features.
    Ron: The other interesting thing in some of the writing you pointed me to before this, is you mentioned that there is a lot of value in doing an architecture evaluation.

    People are saving money. They are preventing big disasters early on. Is this something you do late in the project or early in the project? When does this get done?
    Dragos: There are different times when we can run an architecture evaluation. To get back to the numbers first, there is a study that was published in IEEE Software, I believe in 2005, in the April/May issue. The study comes from people from AT&T, Lucent, Avaya, and so on.

    Their estimated savings -- they looked at about 700 architecture evaluations and averaged the savings that they had realized -- are around one million dollars per 100,000 lines of commented code. These are the numbers, if you are interested in the numbers.
    Ron: [laughs] Wow, that is astonishing.
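    For a rough sense of scale, here is that figure turned into arithmetic. This is a back-of-the-envelope sketch, and the 250,000-line system below is a made-up example, not one from the study.

    ```python
    # Back-of-the-envelope use of the IEEE Software figure quoted above:
    # roughly $1,000,000 saved per 100,000 lines of commented code.
    # The 250,000-line system is a hypothetical example.
    SAVINGS_PER_LINE = 1_000_000 / 100_000  # about $10 per line

    def estimated_savings(lines_of_code: int) -> float:
        """Rough savings an architecture evaluation might yield."""
        return lines_of_code * SAVINGS_PER_LINE

    print(f"${estimated_savings(250_000):,.0f}")  # -> $2,500,000
    ```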
    Dragos: Back to your question. Ideally you'd like to run an architecture evaluation once some of the key design decisions have been made, but not so late that you cannot go back and change them should the evaluation point out that these decisions are not the right ones; that maybe you made some assumptions when you made these decisions and those assumptions do not hold; or that maybe there are better solutions out there.

    The mistake here is that sometimes you start doing an architecture evaluation only to realize that you cannot really change any of these critical things. That it is too late. Other times people want you to run an architecture evaluation before they have made any of these significant decisions, before they've decided exactly how they are going to build this thing. That's too early.

    There's a sweet spot here. You don't want to be too early, because they haven't decided what they are going to build, how they are going to partition the functionality, how the components will interplay with each other, how they will communicate, and so on. Or too late, when they've already made these decisions and it's very hard to go back and change them, so they are going to go their merry way regardless of what you tell them.
    Ron: The other interesting thing that occurs to me, on the issue of timing, is how this is done with agile projects. As they go through each iteration, do you try to do an evaluation for each one? Maybe it's a lightweight process that just takes an afternoon or something. At the beginning of an iteration they say, well, these are the user stories we are going to attack this iteration, and we think we are going to do it this way; they have a general idea of the architecture and how they are going to do it. Is that when you would do it?
    Dragos: That's a common concern that people have regarding architecture evaluation and how it fits with agile development. In my mind, the two complement each other very nicely. Before you sit down and start developing an iteration, it is good to have a sketch on a whiteboard that shows you the approximate components of the architecture. And this is when you want to run an architecture evaluation.

    One of the side effects of the evaluation, besides just validating your design decisions, is that you're going to pressure test the requirements for example. And if the requirements that you have are not sufficient for architecture evaluation, they sure enough are not sufficient for having your development team sit down and implement stories and write code.

    So the two work hand in hand. The architecture evaluation is a filter that helps you decide when you're ready to move on to the development team and have them start working on cards. It employs a lot of principles and techniques that have parallels in agile development.

    For example, the architecture evaluation scenarios that you develop act much like tests, in a very TDD fashion. And having the different stakeholders of the architecture involved in the evaluation parallels having the customer on site in agile development.

    And there is a parallel to refactoring: with architecture evaluation you run the architecture through custom evaluation criteria and see how it responds, or whether it responds as you imagined it would. Then maybe you go back and redesign some pieces; that corresponds to refactoring.

    So the two are very similar, but at a different scale. At the larger scale you're concerned about the different components: design decisions such as, well, am I going to use a bus or a broker, things that you need to decide up front. Now, exactly what's in these boxes you won't decide up front. That is what the agile development process is going to tackle.
    Ron Jacobs: OK, all right. So, all right, we talked a little bit about the timing and so forth, but I'm thinking about, you know, if I were to say, OK, we're going to do an architecture evaluation. I've got to get the team ready so I want to make sure that we've got enough of the architecture in place so that we can evaluate it. And then really the goal, what is the goal of the architecture evaluation?
    Dragos: There are two goals. The key goal is to validate design decisions that have been made with the available information, to make sure that they help meet the stakeholders' expectations about the architecture.

    And when I say the stakeholders, I mean the architecture team, the developers who will build the system, the developers who will maintain the system, the users, the testers, any other stakeholders: the one who's paying for the system to be built, and so on.

    So this would be the direct objective: validating the design decisions that have been made to meet all these stakeholders' concerns. And the other benefit of this is social. You bring all the stakeholders around the same table and help them identify issues that maybe they knew existed but never took the time to discuss before as a group.

    And it's quite amazing what you see when you get all these folks around the table, and issues like, well, how does this fit with our product strategy, or what does this do to the user experience, bubble up. So it's an interesting experience. This is more like a side effect, a social side effect of architecture evaluation.

    Ron: So really it sounds like before you go into this evaluation you really have to, to use the colloquialism, have your ducks in a row. You've got to be prepared to answer a lot of questions about how this is going to work, how that is going to work.

    So you might even want to have sort of a little evaluation within the team itself before you drag all the stakeholders in. Maybe get your architecture team or your dev team together and say, let's just run through this ourselves, make sure we all feel good about this. Are we ready?

    Otherwise it seems like you could go in there and run into a lot of unanswered questions, a lot of scratching of heads and going, oh, I don't know what we're going to do there.
    Dragos: That's right, there is a series of prerequisites that must be met before you can engage in an architecture evaluation. One of them revolves around the critical design decisions that I've already mentioned: these have to be made and documented somewhere, so the evaluation team has a set of documents, a set of diagrams, to look at to begin the evaluation. The other one is to identify who the stakeholders are, and in my experience, though people sometimes have an approximate idea of who the stakeholders are, their list is almost always incomplete.

    People who will be maintaining the application typically are not considered when architectures get designed and put on the whiteboards. Sometimes important classes of users are not considered, and so on.

    So that's another prerequisite, a critical prerequisite that needs to be satisfied before you can engage in an architecture evaluation. And the way I mitigate that is, when I talk to people about architecture evaluation, I walk them through the process and give them a bullet list: these are some of the things I expect to have in place when I get there, so I can hit the ground running with the architecture evaluation.

    If these prerequisites are not met then I cannot help you with the architecture evaluation. I will have a biased or an incomplete view of what you have designed.
    Ron: So it almost sounds like you've served in the role of kind of a facilitator to come in and help teams do these architecture evaluations in the past.
    Dragos: That's right. Prior to joining Microsoft I worked for ThoughtWorks, where I developed and led their architecture evaluation practice. In playing that role I worked with several Fortune 500 companies, evaluating their enterprise architectures.

    So I've seen some of these obstacles. I've had to deal with people who were not believers, who needed convincing that they needed to get into it. I've dealt with the politics that typically waits for you at the other end of the tunnel: when you come out of the evaluation, you may have a message that is painful to deliver to some of those involved who have, you know, spent money and time and resources building something that doesn't quite meet the stakeholders' expectations.
    Ron: So OK, you've got all the stakeholders together, you've got some critical design decisions made, and you said the goal is to validate these design decisions. Now this is the tricky part, right?

    Because I can just imagine, you know, everybody sitting in a room going, well, how would this work? That'll never work. And they're arguing back and forth about whether it will work or not. How do you get through to validating a design decision?
    Dragos: So let me explain a little bit here, because I got into the details without framing the problem space. We're talking about scenario-based architecture evaluations, where we use evaluation scenarios as the vehicle for measuring how the architecture supports important architecture goals such as performance, flexibility, security, maintainability, conceptual integrity, and so on.

    There are other methods for evaluating architectures, and I don't want to spend too much time talking about those because they are not as generally applicable as scenario-based evaluation. But for the record, one method involves simulation.

    And this is typically restricted to problem domains that have been studied for a long time and we have a simulator that we can run to determine how the architecture is going to perform.

    So think of your cell phone software, or the software that controls the ABS system in your car. These are systems that can be simulated easily, and you can run the architecture through a simulation to see how it reacts, and they give you very precise results.

    Then another class of evaluations employs checklists. Checklists are also suitable for classes of systems. They're more generic than simulations and they cover additional quality attributes that a simulation cannot.

    Simulation typically covers things like security, where you employ threat modeling, or performance, where you employ queuing theory. But simulation is not going to tell you a whole lot about maintainability and learnability and all these other -ilities.

    Checklists will tell you additional information about these architecture goals, though the precision is not as high as what you get from a simulator. And then the most generic class of architecture evaluation methods is the one that employs scenarios, evaluation scenarios, as the custom criteria for deciding whether an architecture meets its goals or not. And this is the class that I've employed, because you can use these methods for a wide range of architectures.

    Now the caveat: that is the pro; the con of scenario-based architecture methods is that you won't get a very precise answer. The outcomes of an architecture evaluation that employs scenarios are typically back-of-the-envelope calculations. Yes, this is going to work; no, this is not going to work.

    So back to your question now. Once you have these evaluation scenarios, the evaluation team typically sits with the architecture team in the same room and asks them to explain how the architecture responds. What components are involved in implementing this particular scenario? How do they communicate? What information do they exchange? What assumptions were made when the design was put forth? What architecture strategies have the architects employed to meet particular goals, such as performance, or security, and so on?

    And based on the architecture team's answers, the evaluation team gets a better handle on how much thought has been put toward meeting these goals and whether there are any unanticipated consequences, any conflicts between the various architecture strategies that the architecture team has employed.
    Ron: OK, so you mentioned something there that got me thinking. The scenarios are derived from the goals for the architecture, so you'd have to have a really clear understanding of the goals for the architecture. I know you worked on the Web Service Software Factory, so I was just thinking about this. You could say, "well, one of the goals for the Web Service Software Factory was to ensure a smooth transition for people from ASMX-based web services development to WCF-based web services development." So that's a pretty clear goal for the architecture.

    And so then you could say, OK, a scenario is: I'm building a project, we're starting with ASMX, and we expect that in V2 of the project we're going to migrate the whole thing to WCF. So that's the scenario. And then the evaluation team says, OK, tell us how that would work. And then the architects and the dev team are going to walk through: "OK, this is how it's going to work, and we made this decision because it supports this goal in this way." Is that what you're talking about?
    Dragos: That's right. So you'd take a scenario like this, and a very important thing about a scenario would be to quantify the response. If you say, I want to support migrating from a particular technology to another one, how do you measure that? From the perspective of an evaluator, I need to know: what am I looking for? How do I quantify these things? In this example, the quantification of the scenario, the response measure, might be that the work required to port a service built with one version of the factory to another version must not exceed two man-weeks, for example.
    Ron: [agrees]
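    To make the shape of such a quantified scenario concrete, here is a minimal sketch of one as a data structure, reusing the ASMX-to-WCF example from the conversation; the field names are hypothetical, not part of any formal method.

    ```python
    from dataclasses import dataclass

    # A minimal sketch of an evaluation scenario with a quantified response
    # measure, per the discussion above. Field names are hypothetical.
    @dataclass
    class EvaluationScenario:
        goal: str              # the architecture quality being probed
        stimulus: str          # what is done to the system
        response_measure: str  # how the outcome is quantified

    migration = EvaluationScenario(
        goal="flexibility",
        stimulus="Port a service built with the ASMX version of the factory "
                 "to the WCF version",
        response_measure="Migration effort must not exceed two man-weeks",
    )
    print(migration.response_measure)
    ```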
    Dragos: And then you would go and ask the architects of the factory, or whatever system you're evaluating, what architecture strategies they have employed to meet this goal. For example, one of the things they might bring up would be late binding. We're employing late binding here instead of hard-coding these components, or technology, or configuration. We have put it in a configuration file, and we read this configuration file at run time rather than compiling it together with the code. That way we can change the components that we wire together; for example, maybe we're doing some dependency injection or things like that. We can change these components without touching the code.

    So we can accommodate different configurations. We can accommodate different technologies. We can accommodate different lineups of what components are in the architecture at run time. This would be an example of an architecture strategy.
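    As a rough illustration of the late-binding strategy described here - wiring read from configuration at run time, so components can be swapped without touching code - consider this minimal sketch; the file name, keys, and component paths are all hypothetical.

    ```python
    import importlib
    import json

    # A minimal sketch of late binding: the concrete component for each role
    # lives in a config file read at run time, rather than being compiled in.
    # The config file and its contents are hypothetical, e.g.:
    #   {"transport": "wcf_adapter.WcfTransport"}
    def load_component(config_path: str, role: str):
        """Instantiate whatever class the config wires to the given role."""
        with open(config_path) as f:
            wiring = json.load(f)
        module_name, _, class_name = wiring[role].rpartition(".")
        module = importlib.import_module(module_name)
        return getattr(module, class_name)()

    # Swapping the ASMX transport for a WCF one is then a config edit,
    # not a code change:
    #   transport = load_component("components.json", "transport")
    ```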
    Ron: OK, so when you talk about the evaluation team, I'm picturing maybe you bring in two or three architects who are not part of this project, who are going to be skeptical and who are going to insist on the detail. Somebody who would understand the architecture, so that if you were the evaluation guy, I would have to convince you that the design decisions I made are going to support that goal. I'm asking because we talked about the stakeholders. I'm imagining, if this were an enterprise project and we've got one of the guys from the business unit, he's not a technical guy at all, and he's sitting there in this meeting with his eyes glazing over, going, "I don't understand a thing these guys are talking about." Would you want that kind of person in this meeting?
    Dragos: Absolutely. Typically the architecture evaluation team comprises some seasoned architects who understand the typical architecture strategies that architects use to achieve these goals, and you also want to supplement this team with domain experts. For example, if I'm evaluating a system for banking, for investment banking, I would like to have in the room someone who understands investment banking better than I do, so they can ask questions from that angle. If I'm evaluating a system that deals with claims processing for an insurance company, I'd like to have someone who understands claims processing better than I do, so they can ask questions from that angle.

    So typically, besides the things that an architect who is not specialized in claims processing or investment banking is asking, there is a set of other questions that folks who have worked in this space for a while can ask off the top of their heads, because they see these problems over and over again, certain patterns, if you will. If you're building an investment banking system, there is a set of questions that you must be ready to answer, because these are concrete examples that the architecture must be able to react to. So make sure you have an expert on hand to supplement the knowledge that the evaluation team brings to the table with additional domain-specific knowledge.
    Ron: It occurs to me, then, that there may also be people who have experience in other elements of the infrastructure, let's say. If we ran a large enterprise organization, you might have somebody who really understands the security elements of the enterprise infrastructure, and they could validate the decisions you made around security. Or you've got someone who really understands the mainframe, the way you interact with the mainframe, and the kind of services it can provide, and they validate your plans around that. So you might have to have those kinds of people as well.
    Dragos: Absolutely. Here are some other examples. Particularly in heavily regulated industries like finance and insurance, you typically want to bring in standards experts, because there are standards out there that the typical financial system and insurance system has to comply with. You may also want to bring in an expert who can shed light on how a particular product has been employed.

    Let's say you're using an integration broker or an ETL tool. You want someone who is experienced with those tools, those components, to help you craft and articulate scenarios, architecture validation scenarios, that exhibit or showcase the critical aspects of the components in the architecture. There are other areas like this, but standards and product-specific expertise are, I think, important traits that we see in an increasing number of enterprise systems, due to their size and due to the fact that the business environment is standardizing quite quickly.
    Ron: It seems like the kind of thing that, if this were a larger organization, you would want to establish some kind of rules or process or policy around how evaluations are put together. Here are people who are trained on how to participate in architecture evaluations, people who know how to facilitate them, so you could pull together a team and not have to learn this all over again every time.
    Dragos: That's right. Ideally you'd have a special group that specializes in architecture evaluation, and this group would get augmented each time they run an evaluation with people who are not part of the standard evaluation group, people who just play the role of architect on other projects. That way they could learn by going through the motions and get better at what they do.

    Back to something that you said earlier that I forgot to mention at the time. In an ideal world, the architects would go and ask the evaluation team: please run an evaluation on my architecture. I have hit this critical milestone. I've made some decisions. I want a fresh pair of eyes, or pairs of eyes, to have a look at my design decisions and validate them. In an ideal world, the architect would regard the evaluation team as their friend - someone who would help them succeed - not someone who is looking for holes and misunderstandings.

    It is the same as the relationship between a developer and a tester. The tester is not there to prove the developer wrong. The tester is there to help the developer meet their goals. In the same way, the architecture evaluation team is there to empower the architect, to make sure that at the end of the day their architecture, their design, succeeds rather than fails.
    Ron: That's a good point. It just made me think about how the concept of pair programming is sort of like this, in that you've got two developers sitting down working on the same problem, and the idea is that two minds thinking about the problem yield a better result. Even though you've got these two people focused on one thing, it is still better than having them focused on different things, qualitatively better. It made me wonder whether this evaluation points toward the need for a new concept: instead of just pair programming, maybe we should have pair architecting as well.

    Maybe you have a couple of architects collaborating together. I am not sure that you would look at the evaluation team as being part of the pair architecting, as they are just coming in for one shot to evaluate it. But maybe on projects where there is a single architect, and this is all on them, I thought, well, maybe we could take somebody from this evaluation group and say: you are going to be kind of the pair architect for this person. You come in well before the evaluation, so you could help them know if their project is ready for the evaluation. You could help them know what areas need more detail, more critical thinking around them. You could kind of help them through that. What do you think?

    Dragos: I think that is a great model. One of the side effects of architecture evaluation is that at the end of it, the documentation about the architecture is much better, the requirements are much better understood, and some of the constraints and assumptions have been pressure tested.

    I have quite a few examples that come to mind right now, where I can think of what we got at the beginning of the evaluation and what these artifacts looked like at the end. In an ideal world you'd see no change, because all the t's would be crossed and all the i's dotted when the architecture team commissions the architecture evaluation. In the real world they are not as meticulous, and maybe some things slip through the cracks, so there is quite a delta between what we started with and what we have at the end of the evaluation. I think if you had someone supplementing or playing a different role on the architecture team, it would help the architecture team get these artifacts and prerequisites ready. Then this delta between the end and the beginning would be much smaller than it is.
    Ron: Yeah, that also might give you a good metric for evaluating architects, which is something I often get asked about. IT managers want to know, "Well, we've got these architects, and I've got to review them for performance. How do I evaluate the work of an architect?" Looking at how their architectures come through evaluations, how prepared they were, how valid their decisions were, might give you some good indicators about the strengths and weaknesses of the architects serving in the organization.
    Dragos: It may, but I would stay away from that area, simply because sometimes the architect didn't have access to particular resources that were made available for the architecture evaluation, and wouldn't have considered them otherwise. So I wouldn't use architecture evaluation as a means of evaluating an architect. It's just a means of validating the decisions they've made. It may discover additional artifacts, additional facts, that they haven't had access to.

    And it happens, you know. It's happened to me on several occasions that the architect hasn't had access, particularly when you have a large project that spans different sites, with people in different time zones. And maybe they didn't pay attention to the architect when the architect was designing this thing. But now that management has commissioned an architecture evaluation, everybody who's going to be involved in it is going to sit at the same table.

    So, I wouldn't recommend using the evaluation of the architecture to evaluate how an architect has done on a particular project. We're looking at different subjects here.
    Ron: Oh! OK, all right. But you know, that's a good point, because the very fact that you put this evaluation in place will cause facts to come to light that wouldn't be revealed otherwise. Because now you've got everybody talking. And like you said, I like this picture of "pressure testing," you know.

    It reminds me of when I was in high school. I took the auto shop class, you know. We learned about engines and all that. And of course one of the things you do to an engine, especially if you take any of the compression parts apart, the heads, the cylinders, whatever, is pressure test it to see if there are leaks. If it's not getting good compression, you're going to have a problem. So you're injecting pressure into the engine to see where it is leaking, right? And that's a great visual picture for this.

    You're looking at this architecture, and sometimes all you have is this whiteboard with drawings of boxes and lines and stuff. And you go, "I guess that will work." But when you start asking more specific questions, when you pressure it with a scenario - where is that thing going to leak?
    Dragos: So here are some examples of what typically happens. Sometimes you start with, you know, a box-and-line diagram, and this is the architecture that they have. And you start mapping a particular scenario, an evaluation scenario, onto the architecture. And it's quite typical that a scenario boils down to one component, but you don't know exactly what's happening in that component.

    So that's a clear signal that that area of the architecture has been under-documented, that it is not well understood. If people boil down the entire scenario, or a large fraction of the scenario, to one box on the diagram, that's a signal that you need to spend more time figuring out how you will design that component. What's in that component? What role does it play, and how does it interact with other components?

    And other times you may have scenarios that do not map at all into the architecture. A scenario may reveal that a significant chunk of the architecture is missing. People have not thought about this yet, and this is a scenario, an evaluation scenario that brought up a missing piece. So, there's value in that as well.

    And other times, the evaluation scenarios may help you understand what the limits of the architecture are. When you think of evaluation scenarios, they are not things that you would use to build the system; their only purpose is evaluation. They differ from scripts or scenarios that you would hand over to a developer, in that they may involve things that you don't expect to do with the system, that you don't expect the system to act on.

    Yet they shed interesting light on how the architecture performs under stress, or under circumstances that you don't really anticipate. So think of those car-and-driver reviews, test drives, right? They tell you, "Well, we drove this car on the highway, we drove it in the city, we drove it off-road." That does not necessarily mean that you'll do all this driving when you buy the car. Yet you're still interested to see how the particular car that you're interested in acts when you drive it off-road. Maybe you never imagine driving your car off-road, but you want to see, you know, how does the suspension act? How does the engine pull? And so on.
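    As a sketch of the two signals described above - a scenario that collapses into a single box, or one that does not map onto the architecture at all - here is a minimal illustration; the scenario-to-component mapping and all names in it are hypothetical.

    ```python
    # A minimal sketch of the mapping signals described above: a scenario that
    # maps to no component suggests a missing piece of the architecture, and a
    # scenario that maps to a single component suggests an under-documented
    # area. The mapping below is hypothetical.
    mapping = {
        "port ASMX service to WCF": ["config loader", "transport adapter"],
        "recover from node failure": ["message broker"],
        "audit every transaction": [],
    }

    for scenario, components in mapping.items():
        if not components:
            print(f"missing piece: nothing implements '{scenario}'")
        elif len(components) == 1:
            print(f"under-documented: '{scenario}' collapses into '{components[0]}'")
    ```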
    Ron: So, it seems to me that for this scenario-based evaluation to work well, getting the scenarios right, and complete, is crucial. Right? So, you know, if you just come in and say, "Well, there are three scenarios. We did all three, they look good."

    And the evaluation team says, "Hey, wait a minute. None of your scenarios deal with security, and none of them really deal with the scalability of the system. We can't evaluate this, because there aren't enough scenarios to make a complete judgment." So, who writes the scenarios? Do you make a pass over them to say, yeah, these are complete enough to evaluate it?
    Dragos: Scenarios are a critical component of scenario-based architecture evaluation. Without good scenarios, your evaluation criteria will be off, and therefore the outcome of the evaluation will be off.

    So, in an evaluation that I just finished with one of the groups here, I started with the stakeholders that we identified. This goes back to our previous conversation on the prerequisites: start with the stakeholders. And we started with a brainstorming session, with all the stakeholders around the table. We brainstormed things that we envision doing with the architecture when it ships, and things that we envision doing with the architecture in v1, or v2, or v3 - vNext, so to speak, right?

    And as an evaluator, my job is not only to help people brainstorm these scenarios, but also to put on the table any areas that, based on my experience, I think are important but may have escaped them.

    So, let's say I'm sitting in this brainstorming session, and they keep throwing scenarios on the table. At some point I realize, "Well, here's a scenario they haven't covered at all." And I tell them, "What about this scenario? When I evaluated an architecture for a similar system, this was a big deal for them. Is it a big deal for you, or do you not care about it?" And typically that's enough to trigger conversation, and other scenarios emerge in that particular area that got neglected.

    So, you see now how all the pieces fit together, with the stakeholders who represent all these various interests - the developers, the users, the person paying for your architecture, the testers, and so on - as well as the experts that I have brought on board for the architecture evaluation. They may put on the table scenarios that showcase particular areas, maybe standards being upgraded, or particular components of the architecture that have limitations, or that pose some interesting challenges to the architecture that the ones brainstorming the scenarios have not thought about. So, this is how all these key players contribute to the brainstorming session.
    Ron: Yeah, I think that's an important detail, because if you just said, "Oh, the architect and the development team are going to come up with their scenarios and present them to us," well, of course the architecture is going to handle the scenarios they came up with, because they have already thought through all of those. And if all you do is rubber-stamp that, that's not a very good evaluation.

    But this other method, of brainstorming with all the stakeholders, captures a lot of different perspectives - business and domain experts, and infrastructure, and so forth. So you're going to get a much broader understanding.
    Dragos: You're going to get a broad understanding, and you will get far more evaluation scenarios than you will ever be able to cover in an evaluation. So what happens at the end of this session is you want to prioritize. You have to recognize the fact that your time and resources are limited. You want to prioritize, and reduce the large number of brainstormed scenarios to a manageable set.

    First, you generalize and start, you know, putting them together. Typically I work with index cards on a big table, organizing all these brainstormed scenarios into clusters. If you will: I have scenarios that talk about "flexibility," I have other sets of scenarios that talk about "integrability" with other systems, maybe I have a set of scenarios that talks about "buildability."

    And then within these areas I start identifying scenarios that can be combined, because my end goal is to reduce the number of evaluation scenarios while keeping the key traits that the stakeholders have in mind.

    And then once I have combined these scenarios into a smaller set, I run a prioritization session, where I give everybody a chance to put votes on scenarios so I can figure out exactly what is most important to them. Then I draw a line: somewhere down the list you're going to see a drop in the votes, and those below the line are not going to make it.

    So that's the typical process of how you fine-tune and assemble these custom evaluation criteria. To take a step back and say this another way: the key idea behind scenario-based architecture evaluation is that I don't have a checklist that I apply blindly to any system that happens to sound like something someone else has come up with. Rather than doing that, I assemble custom evaluation criteria with the people who have a vested interest in the architecture. They design the evaluation criteria, and then my job as an evaluator is to measure how the architecture performs with respect to those criteria.

    So it's custom-built, if you will, evaluation criteria for the particular architecture that I'm evaluating. The stakeholders of the architecture design and decide what these criteria are going to be, and as an evaluator, I'm only measuring against what they want.
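    As a sketch of the vote-and-cut step just described - tally stakeholder votes per combined scenario, rank them, and keep everything above the drop - here is a minimal illustration; the scenario names, ballots, and cut rule are made up.

    ```python
    from collections import Counter

    # A minimal sketch of the prioritization step described above: tally the
    # stakeholders' votes per combined scenario, rank them, and keep only the
    # ones at or above the cut line. All names and numbers are made up.
    def prioritize(ballots: list[str], cut_line: int) -> list[tuple[str, int]]:
        """Return (scenario, votes) pairs at or above the cut, best first."""
        ranked = Counter(ballots).most_common()
        return [(name, votes) for name, votes in ranked if votes >= cut_line]

    ballots = ["failover", "asmx-to-wcf", "failover", "peak-load",
               "failover", "asmx-to-wcf", "audit-trail"]
    print(prioritize(ballots, cut_line=2))
    # -> [('failover', 3), ('asmx-to-wcf', 2)]
    ```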
    Ron: So, the question occurs to me, how many scenarios is realistic? You know, if we came out of that meeting and said, "Oh, there are 50 scenarios we want to validate," that probably sounds like, "No way." Do you get through one scenario in an afternoon, or is it, you know, 10? I imagine it varies a lot, but can you give me some general guidelines?
    Dragos: In the most recent architecture evaluation that I performed, for one of the groups here, which I wrapped up about three weeks ago, we came out of the brainstorming session with 72 scenarios, I believe. Once we combined those scenarios and grouped them into areas, we had about 15 scenarios, prioritized in decreasing order of priority. And then from those 15, I think we managed to cover about six or seven.

    So we completed the whole architecture evaluation in two weeks, without working full time on just the architecture evaluation. I think if we had worked full time, we could have done it in a week.

    But in the beginning, you spend quite a bit of time on the first scenarios. You're going to spend quite a bit of time doing the walkthrough, mapping the scenario onto the architecture, and seeing what components are involved and what they do.

    But as you carry on with the scenarios, you'll see that by the third or the fourth, you've already covered big chunks of the architecture, and you pick up the rest of it quickly. Covering the architecture, and how the different components play in the scenarios, goes much faster, because you've already covered a lot of this in the previous scenarios. So the first scenarios are slow, and you pick up speed as you go through them.
    Ron: Oh, OK. Wow! Well, that is just fantastic information, Dragos. Thank you so much for sharing with us today on ARCast.
    Dragos: My pleasure.
    Ron: Dragos Manolescu, people.

    [applause]

    Wow! What a great ARCast. You know, this was just very timely for me, because as part of the Architect Training Course I'm putting together for TechEd this year - I'm doing the pre-conference sessions on architecture once again - we have a session on "Validating Architectures," and it's a tough one.

    You know, to be honest with you, the material I had last year was OK, but this year - much better, because we're taking some of these ideas that Dragos has shared with us today. And this idea of scenario-based validation, and putting that together with...
