Sunday, September 28, 2014

MVC vs. Web Forms

When to Create an MVC Application

You must consider carefully whether to implement a Web application by using either the ASP.NET MVC framework or the ASP.NET Web Forms model. The MVC framework does not replace the Web Forms model; you can use either framework for Web applications. (If you have existing Web Forms-based applications, these continue to work exactly as they always have.)
Before you decide to use the MVC framework or the Web Forms model for a specific Web site, weigh the advantages of each approach.

Advantages of an MVC-Based Web Application

The ASP.NET MVC framework offers the following advantages:
  • It makes it easier to manage complexity by dividing an application into the model, the view, and the controller.
  • It does not use view state or server-based forms. This makes the MVC framework ideal for developers who want full control over the behavior of an application.
  • It uses a Front Controller pattern that processes Web application requests through a single controller. This enables you to design an application that supports a rich routing infrastructure. For more information, see Front Controller.
  • It provides better support for test-driven development (TDD).
  • It works well for Web applications that are supported by large teams of developers and for Web designers who need a high degree of control over the application behavior.
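The Front Controller pattern mentioned above can be illustrated outside ASP.NET. The following Python sketch (all class, route, and method names are hypothetical, not the ASP.NET MVC API) shows the core idea: a single dispatcher owns a route table and forwards every request to a controller action, which is what makes a rich routing infrastructure possible.

```python
# Minimal Front Controller sketch: one dispatcher routes every request
# to a controller action, instead of each page handling its own request.
# All names here are illustrative, not the ASP.NET MVC API.

class ProductController:
    def index(self):
        return "all products"

    def detail(self, product_id):
        return f"product {product_id}"

class FrontController:
    def __init__(self):
        # Route table: URL path -> (controller instance, action name)
        self.routes = {}

    def register(self, path, controller, action):
        self.routes[path] = (controller, action)

    def dispatch(self, path, **params):
        # The single entry point: look up the route, invoke the action.
        controller, action = self.routes[path]
        return getattr(controller, action)(**params)

app = FrontController()
products = ProductController()
app.register("/products", products, "index")
app.register("/products/detail", products, "detail")

print(app.dispatch("/products"))                       # all products
print(app.dispatch("/products/detail", product_id=7))  # product 7
```

Because every request passes through one dispatcher, URL patterns can be changed in one place without touching the controllers.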

Advantages of a Web Forms-Based Web Application

The Web Forms-based framework offers the following advantages:
  • It supports an event model that preserves state over HTTP, which benefits line-of-business Web application development. The Web Forms-based application provides dozens of events that are supported in hundreds of server controls.
  • It uses a Page Controller pattern that adds functionality to individual pages. For more information, see Page Controller.
  • It uses view state on server-based forms, which can make managing state information easier.
  • It works well for small teams of Web developers and designers who want to take advantage of the large number of components available for rapid application development.
  • In general, it is less complex for application development, because the components (the Page class, controls, and so on) are tightly integrated and usually require less code than the MVC model.
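The Page Controller pattern referenced above works the other way around: each page handles the request for its own URL. A minimal Python sketch of the idea (illustrative names only; Web Forms itself implements this with .aspx pages and C# code-behind classes):

```python
# Minimal Page Controller sketch: each page class handles the request
# for its own URL, with no shared dispatcher in between. Names are
# illustrative, not the Web Forms API.

class Page:
    def handle(self, request):
        raise NotImplementedError

class ProductListPage(Page):
    def handle(self, request):
        # In Web Forms, this is roughly the role of a Page_Load handler.
        return "rendered product list"

class ContactPage(Page):
    def handle(self, request):
        return "rendered contact form"

# The server maps each URL directly to its own page object.
pages = {
    "/products.aspx": ProductListPage(),
    "/contact.aspx": ContactPage(),
}

print(pages["/products.aspx"].handle({}))  # rendered product list
```

Functionality is added page by page, which keeps each page self-contained but spreads cross-cutting concerns (such as routing rules) across many pages.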

Features of the ASP.NET MVC Framework

The ASP.NET MVC framework provides the following features:
  • Separation of application tasks (input logic, business logic, and UI logic), testability, and test-driven development (TDD). All core contracts in the MVC framework are interface-based and can be tested by using mock objects, which are simulated objects that imitate the behavior of actual objects in the application. You can unit-test the application without having to run the controllers in an ASP.NET process, which makes unit testing fast and flexible. You can use any unit-testing framework that is compatible with the .NET Framework.
  • An extensible and pluggable framework. The components of the ASP.NET MVC framework are designed so that they can be easily replaced or customized. You can plug in your own view engine, URL routing policy, action-method parameter serialization, and other components. The ASP.NET MVC framework also supports the use of Dependency Injection (DI) and Inversion of Control (IOC) container models. DI enables you to inject objects into a class, instead of relying on the class to create the object itself. IOC specifies that if an object requires another object, the first object should get the second object from an outside source such as a configuration file. This makes testing easier.
  • Extensive support for ASP.NET routing, which is a powerful URL-mapping component that lets you build applications that have comprehensible and searchable URLs. URLs do not have to include file-name extensions, and are designed to support URL naming patterns that work well for search engine optimization (SEO) and representational state transfer (REST) addressing.
  • Support for using the markup in existing ASP.NET pages (.aspx files), user controls (.ascx files), and master pages (.master files) as view templates. You can use existing ASP.NET features with the ASP.NET MVC framework, such as nested master pages, in-line expressions (<%= %>), declarative server controls, templates, data-binding, localization, and so on.
  • Support for existing ASP.NET features. ASP.NET MVC lets you use features such as forms authentication and Windows authentication, URL authorization, membership and roles, output and data caching, session and profile state management, health monitoring, the configuration system, and the provider architecture.
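The interface-based contracts and DI support described above are what make controllers unit-testable without running an ASP.NET process. A language-neutral Python sketch of the idea (all names are hypothetical, not the actual ASP.NET MVC types):

```python
# Sketch of DI-driven testability: the controller depends on an
# abstract repository, so a test can inject a fake in place of a real
# database. All names are illustrative.

class ProductRepository:
    """The 'interface' the controller codes against."""
    def all(self):
        raise NotImplementedError

class ProductController:
    def __init__(self, repository):
        # Dependency Injection: the repository is handed in; the
        # controller never constructs its own data access object.
        self.repository = repository

    def index(self):
        return {"view": "index", "model": self.repository.all()}

# Unit test with a mock object -- no web server, no database.
class FakeRepository(ProductRepository):
    def all(self):
        return ["widget", "gadget"]

controller = ProductController(FakeRepository())
result = controller.index()
assert result["model"] == ["widget", "gadget"]
print("controller test passed")
```

Because the dependency arrives from outside, the same controller runs unchanged against a real repository in production and a fake one in tests.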

http://msdn.microsoft.com/en-us/library/dd381412(v=vs.108).aspx

Saturday, August 23, 2014

CONTAGION—How market selloffs happen

http://fortune.com/2014/08/22/contagion-how-market-selloffs-happen/

Why does investor panic suddenly take hold? Here, the fourth installment of a new Fortune series on how things spread.

Twitter, the social media company that uses a bird for its corporate logo, was the canary in the coal mine.
In early February, on a single day of trading, Twitter’s stock fell 24%, to $50 from $64. The impetus, such as it was, was that ad sales were not increasing as fast as expected. The rest of the market shrugged it off. The same day the Dow Jones industrial average rose 188 points to a new high.
A little more than a month later, technology stocks were in free fall, tumbling faster with every whiff of bad news. Facebook shares, which had zoomed up 30% in late January and February, fell 20% in March and early April. Netflix’s shares lost $100.
And it wasn’t just technology stocks that were tanking. Electric car company Tesla’s shares fell as well. As did many biotechs.
What had changed? If anything, the economy seemed to have improved.
On April 10, the technology-heavy Nasdaq Composite index lost 129 points, its worst single-day drop in more than two and a half years—a swoon that sent the index below 4,000 for the first time in months. Many were saying this was just the beginning of a much larger selloff.
And then—just as quickly as it was—it wasn’t. Though Twitter shares haven’t fully recovered, technology stocks have been mostly rising ever since. Nasdaq is well above where it was before its April dive and is now just a tad off the nearly 14-year high it set on July 3.
What changed, again? Beats me.
I have been reporting on the markets since 1996. In that time, I have covered two major market panics—the dotcom bust and the financial crisis—and dozens of minor ones. All of them have come without much warning, except in retrospect. Most go unnoticed until a good deal of the damage has already been done. And they disappear when you least expect it.
The changes in investor mood often happen in a flash—an infection of outlook that can seem as swift as an epidemic of flu. How and why does it happen? And are there similarities between such market mood swings and the way other things—pathogens, fashion trends, gossip—spread?
In a new Fortune series on Contagion, my colleagues and I set out to explore this murky process, investigating the spread of things as varied as the MERS-CoV virus (here and here), M&A rumors, book sales and even a social phenomenon, ahem, such as “the selfie.”
In this same vein, I dove into a mystery that has been stumping market-watchers and financial journalists—myself included—for ages: What causes investors to go from optimistic to nervous to panicked and back?
The answers that economists have come up with have been mostly unsatisfying or disproven. For a while, many settled on the notion that the market was essentially random, and left it at that.
But then came the financial crisis, which nearly swallowed the entire economy. Again the search for the causes of financial contagions, and how to contain them, became a hot topic.
The good news is that we have new research on what causes market panics, including a major study that came out in just the past few months. The bad news is this likely won’t end the debate either. Here’s what we know, and don’t know, about investors and their freak-outs.
Ask most professional investors and market strategists why stock panics happen and you will mostly get the same answer: Stock prices get too high.
“It’s called exhaustion in market terminology,” says Fred Hickey, who is the long-time editor of the widely followed newsletter, The High-Tech Strategist.
The realization that stocks were significantly overvalued appears to be what led to the 2000 tech bust. In March 2000, Barron’s published an article detailing how fast dotcom companies were running out of cash and how overvalued their stocks were. (Fun side note: Henry Blodget, then a technology stock analyst, said the math of the article was wrong.) That was long before anyone realized the accounting tricks that dotcom companies, and others, like Enron, were playing.
But we didn’t need to know any of that stuff to panic. In the three days following the Barron’s article, the Nasdaq index fell nearly 500 points, or 10%.
Hickey says the same was true in the run-up to the financial crisis. Compare stocks to corporate sales or earnings, and it was clear that the market was overvalued in October 2007. And earlier this year, the prices of technology stocks, when you factor in their current growth projections, were trading at a higher multiple of earnings than they were back in 2000.
“Why anyone would expect anything other than a decline is beyond me,” says Hickey.
Market value explanations, while a big deal for Wall Streeters, never really held much sway with finance professors. A little over a decade ago, Nobel Prize-winning economist Edward Prescott did a study of stock prices in 1929, before that year’s giant stock market crash. His conclusion: The market didn’t crash in 1929 because stocks were overvalued. If anything, based on what we know now, the market was cheap.
And while there were plenty of stocks that turned out to be worthless when the dotcom market crashed, others were cheap even at the peak. Amazon.com’s stock, for instance, topped out at a split-adjusted $89 in 2000. It now trades for $328.
Technology stocks, too, are just as expensive as they were a few months ago. Witness the phenomenon of ride-hailing app Uber, which was valued in early June at an eye-popping $17 billion—and that was before it raised another $1.2 billion in venture funding. That would make it worth more than Hertz (market capitalization: $13.5 billion), on virtual dotcom paper, that is. This all sounds bubbly, and yet, no one is running to sell.
The Barron’s article in 2000 was not the first time the financial media had raised the warning sign about tech stocks. Yet, for some reason that article stuck.
“The best we can say is that what causes selloffs is changes in investor sentiment,” says Harvard professor Malcolm Baker. “But why that change occurs we really don’t know.”
The theory that most economists prefer to explain stock market selloffs—probably because it comes from their own playbook—starts with supply and demand.
And it explains many market bubbles and busts in new technologies. Often when investors catch wind of an exciting new technology like, say, the Internet—or, today, social media and electric cars—there are few, if any, publicly traded companies.
Investors will pay up for those shares if it’s the only way to get in on the trend. But as more companies that do the same thing go public, or the ones that are public sell more shares, the supply of available shares increases. And as supply rises, prices tend to fall.
And there’s some evidence that’s what happened with technology stocks earlier this year. More than 45 technology companies went public in 2013 and early 2014, including Twitter, a number of other social media companies and game maker King.com, which owns the obsessively popular Candy Crush. What’s more, each of these companies has a lockup period—a time, usually six months after the IPO, after which insiders can sell shares. That increases the number of shares investors have to gobble up. The expiration of Twitter’s lockup alone made another 500 million shares of social media stock available for trading.
But that doesn’t explain market panics, like we saw earlier this year. If it were all about supply and demand, you would expect the selloffs to be measured and gradual. It also doesn’t explain why more shares of Twitter or King.com would cause investors to dump their holdings of Tesla or a slew of biotech companies, which also sold off in the spring. (In fact, Twitter’s lock-up expired in early May, when technology stocks were recovering.)
Nor does the theory really explain the financial crisis. Houses and mortgage bonds weren’t new, though the supply of both definitely increased during the run-up to the housing bust.
And neither the valuation explanation nor the supply argument really explains why this year’s tech stock selloff didn’t spread. Non-tech stocks, after all, look expensive, too. And many large companies have spent the past few years selling debt. Yet, outside of tech, stocks have continued to rise with barely a hiccup this year.
Tech investor Kevin Landis of Firsthand Funds says part of the problem is there’s no obvious leader for tech investors to use as an anchor. A decade ago, investors would look to movements in Microsoft or Intel. But Microsoft has stumbled and Intel isn’t the powerhouse it used to be. “I don’t think you would say as Tesla goes, so goes the market,” says Landis. “Facebook is going to get there but it’s not there yet.”
That leads us to the latest theory of why market panics happen. In academic circles, at least, there’s been a resurgence of interest in a theory that was popular a decade ago but had been dismissed by believers in efficient market theory. But now that the financial crisis has discredited the efficient market hypothesis—clearly houses and mortgage bonds were mispriced—alternative theories are making a comeback.
The theory is called the consumption-based asset pricing model. Most theories of how the stock market works are based on the idea that investors sit around thinking about what Amazon or Apple might be worth. Together, by buying and selling stock, Mr. Market comes to some conclusion.
But the consumption-based asset pricing model says that’s not the way it works at all. Investors, actually, spend very little time thinking about whether a company’s shares are undervalued or overvalued. Instead, most investors make their investment decisions based on how much money they have and when they will spend it.
“Something that has no cash flows now but a lot in the future, I would be nervous about in a period when all of a sudden I think I am going to need cash,” says Chris Brightman, who leads the research and investment management team at investment advisor Research Affiliates.
In early January, Sydney Ludvigson, an economics professor at New York University who has been a defender of the consumption-based theories, co-authored a study that found that 75% of short-term stock price movements have to do with changes in investors’ appetite for risk.
It turns out the theory does a pretty good job of explaining the recent tech selloff. In the first quarter, corporate profits weren’t any worse than they were last year. But the economy did slow. And that may have made individuals slightly more concerned about how much cash they had, and whether they would need it sooner than they thought. And if you are worried about needing cash soon, you are probably less likely to invest in companies like Tesla and Twitter, which have a lot of potential, but are not yet producing profit, or a lot of it anyway. But you might still not be concerned about investing in Berkshire Hathaway or GE or Walmart.
By early April, the economy was starting to pick up again. And when it did, technology stocks rebounded.
Nobel Prize-winner Prescott kind of agrees. He thinks the main thing that drives stock prices is tax policy. If investors think they are going to have to pay more in taxes, they will invest less. That doesn’t really explain why tech stocks dropped this year, or in 2008—but the point is, stock market panics are not only driven by a fear of stocks.
And that seems like a pretty good answer as to why people all of a sudden go running for Wall Street’s exits, at least for now.

For more inside the world of contagion, see

 CONTAGION—How things spread. Introducing a new Fortune series

• Part 1: How a bat virus became a human killer

• Part 2: How the MERS virus made it to Munster, Indiana

• Part 3: How M&A rumors spread

• Part 4: How market selloffs happen

• Part 5: How Americans fell in love with a 685-page economics treatise

• Part 6: How the “selfie” became a social epidemic

• Part 7: How studying Twitter became an academic craze

Wednesday, July 16, 2014

Waterfall model

The waterfall model is a sequential design process, used in software development processes, in which progress is seen as flowing steadily downwards (like a waterfall) through the phases of Conception, Initiation, Analysis, Design, Construction, Testing, Production/Implementation, and Maintenance.
The waterfall development model originates in the manufacturing and construction industries: highly structured physical environments in which after-the-fact changes are prohibitively costly, if not impossible. Since no formal software development methodologies existed at the time, this hardware-oriented model was simply adapted for software development.[1]
The first known presentation describing the use of similar phases in software engineering was given by Herbert D. Benington at the Symposium on Advanced Programming Methods for Digital Computers on 29 June 1956.[2] This presentation was about the development of software for SAGE. In 1983 the paper was republished[3] with a foreword by Benington pointing out that the process was not in fact performed in a strict top-down fashion, but depended on a prototype.
The first formal description of the waterfall model is often cited as a 1970 article by Winston W. Royce,[4][5] although Royce did not use the term "waterfall" in that article. Royce presented this model as an example of a flawed, non-working model.[6] This, in fact, is how the term is generally used in writing about software development—to describe a critical view of a commonly used software development practice.[7]
The earliest use of the term "waterfall" may have been a 1976 paper by Bell and Thayer.[8]

Model

In Royce's original waterfall model, the following phases are followed in order:
  1. Requirements specification resulting in the product requirements document
  2. Design resulting in the Software architecture
  3. Construction (implementation or coding) resulting in the actual software
  4. Integration
  5. Testing and debugging
  6. Installation
  7. Maintenance
Thus the waterfall model maintains that one should move to a phase only when its preceding phase is reviewed and verified. Various modified waterfall models (including Royce's final model), however, can include slight or major variations on this process.[citation needed]
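The phase ordering above can be modeled as a simple gate check: a phase may begin only after every preceding phase has been reviewed and verified. A small illustrative Python sketch (not a real project-management tool):

```python
# Sketch of the waterfall rule: a phase may start only when every
# preceding phase has been verified. Illustrative only.

PHASES = ["requirements", "design", "construction",
          "integration", "testing", "installation", "maintenance"]

class WaterfallProject:
    def __init__(self):
        self.verified = set()

    def verify(self, phase):
        # Mark a phase as reviewed and verified.
        self.verified.add(phase)

    def can_start(self, phase):
        # Every phase earlier in the sequence must already be verified.
        earlier = PHASES[:PHASES.index(phase)]
        return all(p in self.verified for p in earlier)

project = WaterfallProject()
assert project.can_start("requirements")  # nothing precedes it
assert not project.can_start("design")    # requirements not verified yet
project.verify("requirements")
assert project.can_start("design")
```

The modified waterfall models mentioned above relax exactly this gate, allowing phases to overlap or iterate.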

Supporting arguments

Time spent early in the software production cycle can lead to greater economy at later stages. McConnell shows that a bug found in the early stages (such as requirements specification or design) is cheaper in money, effort, and time to fix than the same bug found later on in the process.[9] To take an extreme example, if a program design turns out to be impossible to implement, it is easier to fix the design at the design stage than to realize months later, when program components are being integrated, that all the work done so far has to be scrapped because of a broken design.[citation needed]
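McConnell's 50-to-200-times estimate (see reference 9) makes the economics easy to work out. A quick illustrative calculation (the $200 base cost is an assumption for the example; only the multipliers come from McConnell):

```python
# Worked example of McConnell's 50x-200x estimate: a requirements
# defect that is cheap to fix when it is introduced becomes far more
# expensive if it survives into construction or maintenance.
# The base cost is illustrative; the multipliers are McConnell's.

fix_at_requirements = 200  # dollars of effort, assumed for the example
low, high = 50, 200        # McConnell's multiplier range

print(f"fixed late: ${fix_at_requirements * low:,} "
      f"to ${fix_at_requirements * high:,}")
# fixed late: $10,000 to $40,000
```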
This is the central idea behind Big Design Up Front and the waterfall model: time spent early on making sure requirements and design are correct saves much time and effort later. Thus, the thinking of those who follow the waterfall process goes, make sure each phase is 100% complete and absolutely correct before proceeding to the next phase. Program requirements should be set in stone before design begins (otherwise work put into a design based on incorrect requirements is wasted). The program's design should be perfect before people begin to implement the design (otherwise they implement the wrong design and their work is wasted), etc.
A further argument for the waterfall model is that it places emphasis on documentation (such as requirements documents and design documents) as well as source code. In less thoroughly designed and documented methodologies, knowledge is lost if team members leave before the project is completed, and it may be difficult for a project to recover from the loss. If a fully working design document is present (as is the intent of Big Design Up Front and the waterfall model), new team members or even entirely new teams should be able to familiarize themselves by reading the documents.[10]
Some waterfall proponents prefer the waterfall model for its simple approach and argue that it is more disciplined[citation needed]. The waterfall model provides a structured approach; the model itself progresses linearly through discrete, easily understandable and explainable phases and thus is easy to understand; it also provides easily identifiable milestones in the development process. It is perhaps for this reason that the waterfall model is used as a beginning example of a development model in many software engineering texts and courses.[citation needed]
It is argued that the waterfall model and Big Design up Front in general can be suited to software projects that are stable (especially those projects with unchanging requirements, such as with shrink wrap software) and where it is possible and likely that designers will be able to fully predict problem areas of the system and produce a correct design before implementation is started. The waterfall model also requires that implementers follow the well-made, complete design accurately, ensuring that the integration of the system proceeds smoothly.[citation needed]

Criticism

Advocates of Agile software development argue the waterfall model is a bad idea in practice—believing it impossible for any non-trivial project to finish a phase of a software product's lifecycle perfectly before moving to the next phases and learning from them.[citation needed]
For example, clients may not know exactly what requirements they need before reviewing a working prototype and commenting on it. They may change their requirements constantly. Designers and programmers may have little control over this. If clients change their requirements after the design is finalized, the design must be modified to accommodate the new requirements. This effectively means invalidating a good deal of working hours, which means increased cost, especially if a large amount of the project's resources has already been invested in Big Design Up Front.[citation needed]
Designers may not be aware of future implementation difficulties when writing a design for an unimplemented software product. That is, it may become clear in the implementation phase that a particular area of program functionality is extraordinarily difficult to implement. In this case, it is better to revise the design than persist in a design based on faulty predictions, and that does not account for the newly discovered problems.[citation needed]
In Code Complete (a book that criticizes widespread use of the waterfall model), Steve McConnell refers to design as a "wicked problem"—a problem whose requirements and limitations cannot be entirely known before completion. The implication of this is that it is impossible to perfect one phase of software development, thus it is impossible if using the waterfall model to move on to the next phase.[citation needed]
David Parnas, in A Rational Design Process: How and Why to Fake It, writes:[11]
“Many of the [system's] details only become known to us as we progress in the [system's] implementation. Some of the things that we learn invalidate our design and we must backtrack.”
Expanding the concept above, the project stakeholders (non-IT personnel) may not be fully aware of the capabilities of the technology being implemented. This can lead to what they "think is possible" defining expectations and requirements. This can lead to a design that does not use the full potential of what the new technology can deliver, or simply replicates the existing application or process with the new technology. This can cause substantial changes to the implementation requirements once the stakeholders become more aware of the functionality available from the new technology. An example is where an organization migrates from a paper-based process to an electronic process. While key deliverables of the paper process must be maintained, benefits of real-time data input validation, traceability, and automated decision point routing may not be anticipated at the early planning stages of the project. Another example is switching from offline or stand-alone systems to online or comprehensive systems.[citation needed]
The idea behind the waterfall model may be "measure twice, cut once," and those opposed to the waterfall model argue that this idea tends to fall apart when the problem constantly changes due to requirement modifications and new realizations about the problem itself. A potential solution is for an experienced developer to spend time up front on refactoring to consolidate the software, and to prepare it for a possible update, no matter if such is planned already. Another approach is to use a design targeting modularity with interfaces to increase the flexibility of the software with respect to the design.[citation needed]
Due to the types of criticisms discussed above, some organizations, such as the US Department of Defense, now have a preference against waterfall type methodologies, starting with MIL-STD-498 "clearly encouraging evolutionary acquisition and IID".[12]

Modified models

In response to the perceived problems with the pure waterfall model, many modified waterfall models have been introduced. These models may address some or all of the criticisms of the pure waterfall model.[citation needed] Many different models are covered by Steve McConnell in the "Lifecycle Planning" chapter of his book Rapid Development: Taming Wild Software Schedules.[13]
While all software development models bear some similarity to the waterfall model, as all software development models incorporate at least some phases similar to those used in the waterfall model, this section deals with those closest to the waterfall model. For models that apply further differences, or for radically different models, see general information on the software development process.[citation needed]

Controversy

Although many references to the waterfall model exist, and many methodologies could be qualified as 'modified' waterfall, the defining aspect of waterfall as a non-iterative process, combined with the lack of citations documenting actual use of such a non-iterative waterfall model, has led critics[14] to argue that the waterfall model itself, as a non-iterative development methodology, is in fact a myth and a straw-man argument used purely to advocate alternative development methodologies.

References

  1. Benington, Herbert D. (1 October 1983). "Production of Large Computer Programs". IEEE Annals of the History of Computing (IEEE Educational Activities Department) 5 (4): 350–361. doi:10.1109/MAHC.1983.10102. Retrieved 2011-03-21.
  2. United States. Navy Mathematical Computing Advisory Panel. (29 June 1956), Symposium on advanced programming methods for digital computers, Washington, D.C.: Office of Naval Research, Dept. of the Navy, OCLC 10794738.
  3. Benington, Herbert D. (1 October 1983). "Production of Large Computer Programs". IEEE Annals of the History of Computing (IEEE Educational Activities Department) 5 (4): 350–361. doi:10.1109/MAHC.1983.10102. Retrieved 2011-03-21.
  4. Wasserfallmodell > Entstehungskontext, Markus Rerych, Institut für Gestaltungs- und Wirkungsforschung, TU-Wien. Retrieved 2007-11-28 from http://cartoon.iguw.tuwien.ac.at/fit/fit01/wasserfall/entstehung.html.
  5. Royce, Winston. "Managing the Development of Large Software Systems".
  6. Royce, Winston (1970), "Managing the Development of Large Software Systems", Proceedings of IEEE WESCON 26 (August): 1–9.
  7. Conrad Weisert, Waterfall methodology: there's no such thing!
  8. Bell, Thomas E., and T. A. Thayer. "Software requirements: Are they really a problem?" Proceedings of the 2nd International Conference on Software Engineering. IEEE Computer Society Press, 1976.
  9. McConnell (1996), p. 72, estimates that "...a requirements defect that is left undetected until construction or maintenance will cost 50 to 200 times as much to fix as it would have cost to fix at requirements time".
  10. Arcisphere technologies (2012). "Tutorial: The Software Development Life Cycle (SDLC)". Retrieved 2012-11-13.
  11. "A Rational Design Process: How and Why to Fake It", David Parnas (PDF).
  12. Iterative and Incremental Development: A Brief History, Craig Larman and Victor Basili, IEEE Computer, June 2003.
  13. McConnell, Rapid Development: Taming Wild Software Schedules (1996), pp. 143–147, describes three modified waterfalls: Sashimi (Waterfall with Overlapping Phases), Waterfall with Subprojects, and Waterfall with Risk Reduction.
  14. A Waterfall Systems Development Methodology … Seriously?, David Dischave, 2012.

Bibliography

This article is based on material taken from the Free On-line Dictionary of Computing prior to 1 November 2008 and incorporated under the "relicensing" terms of the GFDL, version 1.3 or later.
