Wednesday, March 11, 2009

The Joys of Legacy Software #4: Creating pluggable logic

"...the application  I am looking at is an intranet-based line of business system written in C# / ASP.Net.  The client has been having difficulty in developing the system because insufficient thought has been given to the design beforehand..."

The Problem

The software application that I am upgrading needs to use different UI and business logic components depending on what service is being sold.  Furthermore, these services will be expanded and changed in the future and the application needs to be designed to meet these future changes with minimal impact.

The Current Situation

It’s not pretty.  Let’s face it.  You look at the application and you can see that it was designed to sell a given type of service.  Then, later, another service was added and the way of switching between the two was on the basis of “if” statements.  The view and (unfortunately) the calculation logic are wrapped up inside web user controls on the same web page, and there is a decision point in the code to decide which of them to display:

if (service.ServiceType == ServicesTypes.ServiceTypeA)
{
    serviceTypeAControl.Visible = true;
}
else
{
    serviceTypeBControl.Visible = true;
}

I’m sure you’ve seen this a million times.  What’s wrong here?  It is actually worth asking the question, and the answer is all about extensibility.  If the code said:

maleUIComponent.Visible = (sex == sexTypes.Male);
femaleUIComponent.Visible = !(sex == sexTypes.Male);

This would be better, because we know the enumeration is static and we greatly simplify our code by not having to cope with future extensibility.  However, in the first scenario above we know that we have to cope with new services.  This leaves us with one of the first key rules of extensibility:

“If you correctly design an application to be extensible, it should not require any existing components to be changed in order to add new extensions.”

As you can see from the “if” statement above, if you use this pattern to extend the solution and add just one more service, you will require at the very least the following modifications:

·         Additional web user control
·         Change to the web form to add the control
·         Change to the code-behind to update the switching

In doing this there is every possibility that something could go wrong in the code and break functionality that is already in place.  Even though the code in this case is quite simple, it illustrates the point.  If this were a highly complex UI such as Visual Studio, it would be almost impossible to aggregate new components from different development teams, because changes in the components would be happening all the time and you’d never arrive at a stable system.

The Solution

Abstraction.  That’s what helps you here.  Interfaces.  Finding what is common in the uncommon.  Let’s keep with our example application where we are producing quotes for a customer and look at the UI involved.  We have a UI that needs to display a different component depending on the service type in order to view a quote.  The first thing we do is define common base entities that describe our business model.  In this case we are dealing with quotes for a network service, so we might have classes like these:

public class Customer
{
    ...
}

public class Service
{
    public string ServiceName { get; set; }
    public string ServiceCode { get; set; }
    ...
}

public class Quote
{
    public Service SelectedService { get; set; }
    public Customer SelectedCustomer { get; set; }
}

Having created the base entities, the language of our business data, we can then move on to the next stage and create interfaces that describe common functionality.  For example, if we are looking at the creation and viewing of quotes we might have an interface such as this:

public class QuoteEventArgs : EventArgs
{
    public Quote Quote { get; set; }
}

public interface IQuoteView
{
    // Interface members carry no access modifiers; they are implicitly public.
    Quote SelectedQuote { get; set; }
    event EventHandler<QuoteEventArgs> QuoteAccepted;
    event EventHandler<QuoteEventArgs> QuoteDeclined;
    event EventHandler<QuoteEventArgs> QuotePrintRequest;
}

If we’re doing this well, we will probably want to create a base class for our quote view controls.  Note that an interface defines the common behaviour of our components, and it is this that we should reference later, but interfaces cannot provide any common implementation.  That is done by a base class.  If you’re doing this already you might think this is really dull and obvious stuff, but I am posting this blog series precisely because applications are developed without design of this sort.

So, base class (in this case I am assuming that we will be using web server controls, but the same principles apply for user controls as well):

public class QuoteViewBase : Control, IQuoteView
{
    // Note: Quote must be marked [Serializable] to be stored in view state.
    public Quote SelectedQuote
    {
        get { return this.ViewState["quote"] as Quote; }
        set { this.ViewState["quote"] = value; }
    }

    public event EventHandler<QuoteEventArgs> QuoteAccepted;
    public event EventHandler<QuoteEventArgs> QuoteDeclined;
    public event EventHandler<QuoteEventArgs> QuotePrintRequest;

    // Derived controls raise the events through protected helpers like this one.
    protected virtual void OnQuoteAccepted()
    {
        EventHandler<QuoteEventArgs> handler = this.QuoteAccepted;
        if (handler != null) handler(this, new QuoteEventArgs { Quote = this.SelectedQuote });
    }
}

Not wanting to labour the point, but we have a base class that can store our quote in the view state and already has the events wired in as well.  We’re good to go.  Almost.  Remember that we are trying to abstract our application, and so we need to make sure that a web page that is going to display these controls does not have to be affected by new functionality.  This is done by using that most powerful of design patterns – the factory pattern. 

The factory pattern allows us to create our controls whilst hiding the implementation of their creation from the rest of the application.  Therefore, even if the internals of the factory are changed there is no knock-on impact elsewhere.  Our factory might look like this:

public class QuoteViewerFactory
{
    public IQuoteView CreateQuoteViewer(Service service)
    {
        // Implement the factory logic in here.....
    }
}

Powerful stuff.  I can almost sense you gasp as you absorb the implications of this.  It’s like being set free.  It’s like all those if statements and switch statements are ugly and belong to the bad old days.  It’s a new dawn.  In order to hook the controls into our web page, all we need to do is create a placeholder in the page, such as a panel, that we can create controls inside, and then we create a single control for our quote:

protected void Page_Init(object sender, EventArgs e)
{
    try
    {
        Quote quote = Session["CurrentQuote"] as Quote;
        this.PlaceholderPanel.Controls.Clear();
        IQuoteView control = new QuoteViewerFactory().CreateQuoteViewer(quote.SelectedService);

        // The factory returns the interface, but the concrete viewers all
        // derive from Control, so cast in order to add to the control tree.
        this.PlaceholderPanel.Controls.Add((Control)control);
    }
    catch (Exception ex)
    {
        // Log error etc.
    }
}

Now that we have made our service-agnostic base classes and interfaces, and have modified our application to take advantage of them, we need to create the specific implementations of the UI for the services we need to sell.  In order to take full advantage of this pattern, make sure that these service-specific components are placed in separate libraries from the core components.  So, moving on, first we create service-specific entities:

public class ServiceA : Service
{
    // Add specific properties
}

After this we will need to create our viewer for ServiceTypeA:

public class ServiceTypeAQuoteView : QuoteViewBase
{
    // Add UI components and any other internal logic
}
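
To give a flavour of what goes in there, here is a minimal sketch of a concrete viewer.  The bandwidth label and the Bandwidth property are invented for illustration – the real control would build whatever UI ServiceTypeA needs:

public class ServiceTypeAQuoteView : QuoteViewBase
{
    private Label bandwidthLabel;

    protected override void CreateChildControls()
    {
        base.CreateChildControls();

        // Build the service-specific UI.
        bandwidthLabel = new Label();
        this.Controls.Add(bandwidthLabel);
    }

    protected override void OnPreRender(EventArgs e)
    {
        this.EnsureChildControls();

        // Downcast to the service-specific entity; Bandwidth is a hypothetical property.
        ServiceA service = this.SelectedQuote.SelectedService as ServiceA;
        if (service != null)
        {
            bandwidthLabel.Text = service.Bandwidth;
        }

        base.OnPreRender(e);
    }
}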

We’ve done it!  We have created a web page that loads in the appropriate control based on the service we have selected in our quote.  We can add new services and the existing web form is unchanged.  If we design the contents of the factory correctly, all we have to do is add configuration and there should be no changes here either.  There are different ways of doing this, but they all centre on the same concepts:

  • Create a list of all of the available classes that conform to the correct interfaces.  We need a key that we will look up on, as well as the full type name of each class.
  • Given the key, e.g. the ServiceCode above, we look up our configuration and get the type information of the corresponding class.
  • Using the type, we dynamically create an instance of the appropriate class; outside of our factory the application is unaware of anything except its interface.  Note that there is a slight difference in web forms: when we create a user control we should use the Page.LoadControl() method, as our control needs to maintain a reference to the page that has loaded it, whereas if we are creating pluggable business logic we only need the System.Activator.CreateInstance() method to create an object of the correct type.
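
To make that concrete, here is a minimal sketch of what the inside of the factory might look like.  The mapping would normally be read from configuration; the hard-coded dictionary and the type names here are illustrative assumptions only:

public class QuoteViewerFactory
{
    // In a real system this map would be loaded from a config section, so that
    // registering a new service needs no recompile of the core application.
    private static readonly Dictionary<string, string> viewerTypes =
        new Dictionary<string, string>
        {
            { "SVCA", "MyCompany.ServiceA.ServiceTypeAQuoteView, MyCompany.ServiceA" },
            { "SVCB", "MyCompany.ServiceB.ServiceTypeBQuoteView, MyCompany.ServiceB" }
        };

    public IQuoteView CreateQuoteViewer(Service service)
    {
        string typeName;
        if (!viewerTypes.TryGetValue(service.ServiceCode, out typeName))
        {
            throw new NotSupportedException("No quote viewer is registered for service code " + service.ServiceCode);
        }

        // Resolve the concrete type from its assembly-qualified name and create it.
        // The rest of the application only ever sees the IQuoteView interface.
        // (For .ascx user controls you would call Page.LoadControl() here instead.)
        Type viewerType = Type.GetType(typeName, true);
        return (IQuoteView)Activator.CreateInstance(viewerType);
    }
}

Adding a new service then means shipping a new assembly and adding an entry to the configuration – the factory’s callers never change.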

Deployment

The fact that our UI application does not know about anything other than our base entities and interfaces is useful, but it does create a deployment issue if we’re not careful.  Our application will not reference the service-specific libraries, for the very reason that they can be added, removed or replaced without breaking our application.  This also means that a standard compile of the solution will not place the appropriate assemblies into the bin folder of the application, and so straight out of the box our code will fail.

In order to get round this, from a development perspective we need to add build events to our service-specific libraries that copy them into the bin directory, so that we can run them during development.  From a packaging and deployment perspective we need to make sure that all of the libraries get into our installer.  We have various options for this, but one of them may be to add further build events to copy the assemblies into a common “binaries” folder and then have a single build step to package all of these up and deploy them into the bin.

Review

That’s been a bit of a journey to say the least.  This has not been intended as a detailed tutorial on how to achieve pluggability – that may come later, depending on whether I get requests for it – but I really wanted to get across the core concepts of using interfaces and factories to isolate extensible functionality, and hence deliver what we were looking for at the start: a pluggable application that requires minimal intervention to extend its functionality.  The steps we went through were these:

·         Create the base entities to describe our business data
·         Create interfaces that describe our business behaviour
·         Create base classes that implement our interfaces and contain common behaviour
·         Create a factory to abstract the creation of the concrete classes from the application
·         Create our concrete business-specific functionality and place it in separate packages so that it can be deployed independently
·         Create a build process that allows us to add new business-specific extensions in a way that allows them to be packaged up with the rest of the application.

The Joys of Legacy Software #3: Layers and tiers

"...the application  I am looking at is an intranet-based line of business system written in C# / ASP.Net.  The client has been having difficulty in developing the system because insufficient thought has been given to the design beforehand..."

The Problem

In this post I will be using the example of a legacy application that I am re-architecting as a case study to illustrate some design traps, and I will be pointing out some of the steps you can take to ensure you're in with a shout of getting a maintainable application.

The focus of this post is layers and tiers, and I'll be contrasting some of the mistakes I have seen with some better practice.

Some definitions

Layer:  When I talk about layers, I am usually referring to separate assemblies that have a reference hierarchy and together comprise an application.  Sometimes it is possible (although not necessarily desirable) to combine logical layers within the same assembly.

Tier:  When I talk about tiers, I am usually referring to logical sections of an application that can run on physically separate machines, with the sub-applications on each tier together comprising the whole application.

Communication between layers

One of the main reasons that we separate an application into layers is to provide abstraction, i.e. we want layers higher in the stack to rely on the behaviour of their references without relying on their implementation.  This affords us several benefits:

a) In a project we can separate work among several developers without everyone needing to access the same piece of code.
b) We can unit test each layer separately, so we can improve quality and more accurately identify where a problem originates.
c) We can re-use components elsewhere without needing to know their internal workings.

However, a golden rule of layered programming is this:  a layer must not require any knowledge of its calling layers or of other layers higher in the stack.  If a layer needs to pass information back to its calling layer, it raises an event and attaches the information.  There should be no direct calls upwards, or we end up with circular references.
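
As a quick sketch of what this means in code (the types here are invented for illustration), the lower layer exposes an event and attaches its information to the event arguments; the calling layer subscribes, so all compile-time dependencies still point downwards:

// Lower layer: knows nothing about its callers.
public class QuoteCalculator
{
    public event EventHandler<CalculationCompleteEventArgs> CalculationComplete;

    public void Calculate(Quote quote)
    {
        // ... calculation logic ...
        EventHandler<CalculationCompleteEventArgs> handler = this.CalculationComplete;
        if (handler != null)
        {
            handler(this, new CalculationCompleteEventArgs(quote));
        }
    }
}

public class CalculationCompleteEventArgs : EventArgs
{
    public Quote Quote { get; private set; }

    public CalculationCompleteEventArgs(Quote quote)
    {
        this.Quote = quote;
    }
}

// Higher layer: subscribes and reacts; the calculator never calls it directly.
calculator.CalculationComplete += (sender, e) => DisplayQuote(e.Quote);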

This takes me on to one of the horrors that I encountered in my latest application.  In the business logic, there is a reliance on the HttpContext of the website.  This is bad in a number of ways, but primarily it means that it will not be possible to re-use this code, either if we decided to produce a version of our application as a Windows application, or if we wanted to host our logic in a service where we may not have the same richness of end-user context (profile, identity etc.) that we have in a website.
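
The usual escape route (sketched here with hypothetical names) is to put an interface, owned by the business layer, between the logic and the ambient context.  The website supplies an HttpContext-backed implementation, and a Windows or service host can supply its own:

// Owned by the business logic layer; no reference to System.Web.
public interface IUserContext
{
    string UserName { get; }
}

// Lives in the web layer, where a reference to System.Web is acceptable.
public class WebUserContext : IUserContext
{
    public string UserName
    {
        get { return HttpContext.Current.User.Identity.Name; }
    }
}

// The business logic takes the abstraction, not the HttpContext.
public class QuoteApprovalService
{
    private readonly IUserContext userContext;

    public QuoteApprovalService(IUserContext userContext)
    {
        this.userContext = userContext;
    }

    public void Approve(Quote quote)
    {
        // Record who approved the quote without knowing we are in a website.
        quote.ApprovedBy = userContext.UserName;  // hypothetical property
    }
}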

Communication between tiers

When we organise code into tiers, we are separating out portions of the application and allowing them to be run on separate physical machines, with communication over a network.  The reasons for this are:

a)  By distributing across separate machines we allow an application to be scaled out for increased resilience and throughput.
b)  Using the principles of service orientation, we allow not only code to be shared, but the running instances of the code; our logic can be used by different systems, and through this we can build up a flexible enterprise.

The rules for dependency between tiers, especially when we use services, are not as clear cut.  One of the principles of good service-oriented design is that we design service interfaces and the code of the service is abstracted behind them.  This is most distilled in service orientation, but the same principles apply even to such tier separations as SQL connections via OLEDB.  Some of our tiers will have references in a way that is conceptually similar to references between layers, and yet we cannot pass an event up to the calling tier because our tiers do not run in the same physical process.  What we do when we want to pass information back is to publish a message.  In a tiered architecture, messages are what makes it all tick.

Of course, there are some SOA patterns where there is a bidirectional communication protocol between applications, but this isn't really within the scope of what I am talking about here.

There is another horror story here, and it lies in the way the previous developer had integrated with other applications, notably the CRM system.  The developer used a cross-database call to access the CRM system.  When my client decided to change their CRM system, the application broke straight away.  If the application had relied on a CRM search service then the change of CRM system would only have required a replacement service, and there would have been no internal change in the application.
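
To make the alternative concrete, a CRM search service contract in WCF might look something like the sketch below (the names are invented).  The application depends only on the contract, so changing CRM systems means writing a new implementation behind it rather than rewriting the application:

[ServiceContract]
public interface ICrmCustomerSearch
{
    [OperationContract]
    CustomerSearchResult FindCustomer(string customerReference);
}

[DataContract]
public class CustomerSearchResult
{
    [DataMember]
    public string CustomerName { get; set; }

    [DataMember]
    public string AccountNumber { get; set; }
}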

The role of entities

Another thing I have been grappling with is the role of entities in the application I am working on.  Entities are a way of structuring data so that it can be communicated between layers or tiers:
  • Data entities allow communication between a database and a data access layer.  [Another horror story here:  I have come across so many issues when data entities are passed throughout an application, bound to a UI etc.  They should not really propagate outside of the data access layer.]
  • Business entities are the data that business logic understands.  Usually my interface for a data access layer will read / write business entities, and the translation to the data entities will take place inside the data access layer.  The business entities will also be what the business logic interfaces use.  As such, entities are not so much a layer, but something that allows layers to communicate.
  • Data contracts are used for service communications in the WCF model, and allow the client application to communicate with the service tier.  Again, usually there will be some translation between the business entities and the data contracts.
  • View model entities are the entities used by a UI.  In an n-tier architecture these will be the entities that the UI binds on to.  There will need to be a translation from either the business entities or the data contracts, depending on whether we are calling business logic in the same process or across tiers.
It is important that we understand which entities we need to use, because we can then start to design applications that have less dependence and coupling between layers and tiers.  This allows us to build components that use the entities at the correct level of abstraction.
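
As an illustration of the kind of translation involved (the types and properties are hypothetical), a repository in the data access layer might map its data entities to business entities at the boundary, so that nothing ORM-specific leaks upwards:

public class CustomerRepository
{
    public Customer GetCustomer(int customerId)
    {
        // The data entity comes from the ORM (NHibernate, LINQ etc.)
        // and never leaves this layer.
        CustomerDataEntity dataEntity = LoadDataEntity(customerId);

        // Translate at the boundary; callers only ever see the business entity.
        return new Customer
        {
            Name = dataEntity.Name,
            AccountNumber = dataEntity.AccountNumber
        };
    }

    private CustomerDataEntity LoadDataEntity(int customerId)
    {
        // ORM-specific plumbing lives here.
        throw new NotImplementedException();
    }
}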

Where did it go wrong?

In the application I have been re-architecting, the main problem lies in the fact that there is a single database and a single entity model based on NHibernate.  These data entities are then used throughout the application: in the UI, in the business logic and in the data access.  This means that the UI is directly coupled to the database, and also directly coupled to other systems that the application integrates with.  Everything is coupled.

In thinking about this I have realised how important the entity model is in an application, and how central to the development it is.  In my application I have decided to use the database entities as the core of my application going forward, i.e. as the prototype of my business entities, and I am then building a data access layer on LINQ with a translation layer between the LINQ entities and the business entities.  Having separated the business entities from the data access, I have then put in a business logic layer, abstracted the UI from the data and started to get some sort of order into the application.

I don't have enough time (i.e. customer's money) to rebuild the application any more than this, but the creation of a business entity model and a business logic interface is the core thing I needed to put in place to have a chance of making any sense of this application.

Next time.....

Now that I have put the first steps in place to provide some separation between my UI and my logic, next I need to look at how I can create pluggable logic so that I can handle the different services sold by my client in a generic way.

The Joys of Legacy Software #2: Taking over a legacy codebase

"...the application  I am looking at is an intranet-based line of business system written in C# / ASP.Net.  The client has been having difficulty in developing the system because insufficient thought has been given to the design beforehand..."

The Problem

This is often the case when you take over systems;  there is a system live on the client's premises, and you are presented with a dump of the code.  You hope that when you build the code and restore a backup of the databases that everything will work.  But will it?  What sort of state is it in?  Are all the references OK? 

In this post I will be describing some of the steps that you need to take as a consultant when you take over responsibility for someone else's code.  It's not pretty, but getting some of the basics right will help you a lot later on.

Step 1:  Getting a grip on the code

1a:  Get the code in your own repository

There's really nothing else you can do in this situation.  You just have to take the code, get it under your own control, build it and then do some serious regression testing against the system on the client's site.  If any issues are found then you need to highlight them on day one, and if necessary you have to give the client a cost for fixing them.

Fortunately for me, everything here was OK.  I use TFS 2008 as my code repository, and the first thing to do is to get the initial cut of the software into source control.  The structure I use is as follows:

Customer
 - Application
 -  - Release Version

What I have done in this case is to put the code I have taken over under a v1.0 branch and built it from there.  This then becomes the "reference" version of the system.  The v1.0 branch will stay in source control as-is; it will not be modified in any way and will be used purely for reference.

1b:  Create your test environments

This is something to do now.  NOW.  In what is sometimes called the inception phase of the project.  Or "iteration zero".  Whatever you want to call it.  Before you get any further, create the test environments.  I am going to have two environments: one is a "reference" installation that mirrors v1.0 of the system, and the other is for my upgraded system, v2.0.

In my case I have a simple system: in order to deploy each instance of the application I only need a web server and a SQL server.  I also need Active Directory, so I have created an AD installation as well that will be shared across both of my installations – five servers in all.  I have created a set of test users in AD and I am ready to go.

1c:  Create the reference deployment

My v1.0 branch has the same code base as the existing system (allegedly), but since I only need to deploy this system once I am not averse to a bit of a manual deployment.  The key thing here is to get the correct binaries onto the test system and get the database restored.  Tweak the config files to connect to the correct servers and give it a spin.

The main issues I have had in getting the system working are as follows:
  • The data access layer uses NHibernate, so the connection string is in the NHibernate config file which is deployed in the bin directory.  This needs to be modified with the correct connection string.  [Horror moment #1:  The connection string in the config file that I have received from the client has the username and the password included in the connection string!  What is more, commented out in the same file are the connection strings for both the UAT and the production environments!  All unencrypted!  Ouch!]
  • As often happens after restoring database backups, you have to remove the old users and add in the new service accounts.  Note that it's always best to add a service account as a user on your database and then set this service account as your app pool identity in IIS, or use some other form of Windows authentication.
  • The system sends out emails to users during the workflow (another big future topic here), and of course the SMTP settings are included in the web.config of the web site, and so these need to be tweaked as well.
  • The system needs to save files as part of the process, and these are located on a share.  The web.config also contains the location where these files are stored (yet another post in the making here on file storage, document libraries and file storage services).
These tweaks are crucial for your future deployments.  You must note down all of them, as they will become the environment-dependent parameters that your build and deployment process will need to be able to configure on a per-environment basis later on.

1d:  Regression test

This does what it says on the tin.  When you have got your reference installation you need to regression test it against the expected behaviour.  At this point log all known issues or you'll be held accountable for them later!!!  

Step 2:  Start to sort out the mess

After you have got ownership of the code and you have been able to establish that the code you have been shipped is actually working, you now need to get to grips with it and sort it out so you don't have a totally flaky foundation.

2a:  Create the working branch

At this point we have created the v1.0 branch.  What we do now is branch the code to create the new working v2.0 branch so that we can start making changes to the system.  This means that if you get latest on the v1.0 branch you will always have a reference of what was there before.  All I would do at this point is use TFS to create a branch of the v1.0 code.

2b:  Upgrade to latest version of .Net

This is the ideal opportunity to keep the system current.  If you now check out your complete v2.0 branch you should be able to open the solution(s) in Visual Studio 2008 and let it run the upgrade.  You don't need to keep any logs or backups because you will always have your v1.0 branch as a backup.

During this process I had a bit of a nightmare upgrading my website.  The intranet site was one of those awful .Net 2.0 websites where the code-behind is deployed along with the web forms.  The code-behind had no namespaces on it, and as the code is designed to JIT-compile into temporary assemblies you do not get all of the classes you want.  Also, there is further code in the App_Code folder.  This is an evil in its own right.  If you have this on your dev server then even when you have compiled it all into an assembly, IIS will keep trying to compile it, and you sometimes get namespace / type clashes because of this when the app runs.

What I ended up doing (and I had the luxury of not having a website that had too many pages and controls in it, perhaps 20 web forms and 40 controls) was to create a new web application from scratch and then migrate in the web forms one at a time, by creating new forms of the same names and then copying in first the markup and then the code-behind.  This is a really tedious process, but in this way you know that you have got a fully compiled, namespaced application.  I also tend to rename the App_Code folder to just _Code or something like that, so as not to confuse my web server.  Remember to set all of the C# files as compile and not as content if they have come across from the App_Code folder.

2c:  Tidy up the references

When you have a web site in VS2005 and you add a reference, what effectively happens is that the referenced assembly gets copied into the bin directory of the web site and so it is then available to be used.  This is no use for a web application project, as we must reference all of the assemblies so that the compiler can work out the references.

When creating a code structure, what I usually do is as follows.  I will start from the working branch (in this example it is v2.0).

 - v2.0
 -  - Build [More about this in a later post]
 -  - Referenced Assemblies
 -  -  - Manufacturer X
 -  -  - Manufacturer Y
 -  -  - Manufacturer Z
 -  - Solutions
 -  -  - Solution A
 -  -  - Solution B
 -  -  - Solution C

So, having got a bin directory full of strange assemblies, I then copy them out into the referenced assemblies folder and then delete them from the bin.  I add file references to the projects in my solution for the obvious assemblies, and then I use a trial-and-error process of compiling my application until I have referenced all of the assemblies I need to.  You'd be surprised how many of the assemblies that are in the bin directory are not direct references of the web site but have ended up there because they are referenced by a dependent project.

OK.  After this we are good on the references front.  We know what assemblies we have got to deploy and they are all in source control.  We're starting to get to the point where we could actually get someone else to do a get latest and see if they can build the beast.  In fact that's not a bad idea.  Go and ask someone right away.  You've nothing to lose.

2d:  Tidy up namespaces and assembly names

You'd be surprised (maybe) how often you take a solution, build it, look in the bin and find that the assemblies all have different naming standards.  Look at the namespaces too, and these might all be different.  It's a pain in the butt, but you need to decide on your namespace structure, go through each of the projects and set the namespaces and assembly names.

2e:  Tidy up versions and strong names

While you're in there, don't forget that this is also a good time to set your assembly versions.  If you are working in a v2.0 branch you might want to make all of your new DLLs v2.0.0.0 as well.

And this is a good time to create a key file and sign all of your assemblies.  Even the one in the website.  This is sometimes a moment of truth for your referenced assemblies as well, because you can't sign an assembly that references an unsigned assembly.  At Solidsoft we have been working with BizTalk so long, which requires your assemblies to be in the Global Assembly Cache (GAC), that we sign all of our assemblies as a matter of routine.  More seriously though, there is a code security aspect here as well.  You sign assemblies so that they cannot be tampered with.  You don't want to be in the situation where one of your assemblies is recompiled by a hacker with some malicious code in it, and signing removes this risk at a stroke.
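
For reference, the version numbers live in each project's AssemblyInfo.cs, so bringing everything up to v2.0 means settings like these (the signing key itself is most easily switched on via each project's Signing properties page):

// AssemblyInfo.cs in each project
[assembly: AssemblyVersion("2.0.0.0")]
[assembly: AssemblyFileVersion("2.0.0.0")]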

When I went through the signing process I found that the UIProcess application block hadn't been signed when it was compiled, and the codebase I had only referenced the DLL, so I took a bit of a risk downloading the source code, signing it and replacing the assembly.  There was an issue with the configuration so I had to modify the config schema, but other than that everything went fine and I was all sorted out.

Step 3:  Create the build process

This is the time to get this right.  A good build and deployment process can seem like it sucks up no end of project time but you get the payback later when you are trying to stabilise and ship your system.

I use TFS and TFSBuild as my build environments now, although I have used MSBuild and CruiseControl.Net in the past.  I have two build definitions as a minimum.  The first just runs a compile of all solutions and runs unit tests, but does not deploy.  This is triggered by check-in and so is effectively my "CI" (continuous integration) build.  My other build definition is a build / deploy and will push my build out onto a test environment.  I use InstallShield to create the MSIs, xcopy them over onto the test server and then use PSExec to install via a command line.

Review

This has been quite a long post, and in real life this part of my project was a real slog, but at the end of it we're in quite good shape now.  We have got a repeatable process for delivering our application and this is the minimum level you need to be able to ensure quality.  Once you are here, with a build automation process and automated deployment, you can start to overlay automated testing as well as the traditional manual UI testing, but without getting some of the quality in place at this stage you'll never get the results later.

I hope that this has been useful, and that this post has either given you some ideas on organising your solutions or has made you think why you organise your source as you do.  Next time I'll be digging into the application and seeing what's in there.  I'll warn you - some of it isn't pretty!

Sunday, March 08, 2009

The Joys of Legacy Software #1

Getting off the ground with the blogging, I'm not actually going to get started on some brand new bleeding-edge technology, but instead I'm going to be looking at something much closer to home for many people, and that is the issue of the "Legacy Bespoke Line of Business Application".

This is because one of the projects I am consulting on at the moment is for a company with just such a system, and I am trying to evolve the system from its present state into a high-quality architecture.  In doing this, I have found myself on an odyssey of thought.  I have had to try to imagine the thoughts of another developer and understand the decisions they have taken, and in doing so I have had to re-examine the way that I approach architecture and design, and look again at what is right.

One of the most difficult aspects of evolving an application rather than starting one from scratch is that of pragmatism.  When we start working on a "greenfield" solution we take a set of requirements for a system, apply patterns of what has worked before and quickly arrive at a skeleton architecture.  We then look at the detail of the requirements and create data models, look at the main services required and start to create service interfaces and APIs, and then usually we will embark on an iterative development cycle of some sort where we take a slice of functionality, develop it and so prove the design by getting the architecture working end-to-end (I'm not going to get into the agile debate just yet, but take it as read that whether we are using an agile methodology or a waterfall, we will usually develop in incremental iterations).

However, in a legacy application we do not always have this luxury.  The application may already be embedded in a business too deeply, and the cost of a rewrite may be prohibitive.  In this case we need to be able to deliver new or amended functionality within an existing application framework.  To complicate the matter further, we not only have to deliver functionality on our current project but we hope to continue to work with the same client and deliver future releases as well so we can't just hack away as we will only create more mess for ourselves to clean up later.  

Having found ourselves here, we have to make decisions about how much of the system we need to refactor, how much we need to replace altogether, and to what extent we need to grit our teeth and live with what is there.  In doing this we are continually making cost/benefit/risk assessments.  In this series of blog posts I will be looking at some of these decisions that I am making, and the thinking that lies behind them.

The Business

I'm not going to disclose any private details of my client, but in order to get a feel for the same thing I am going to create a substitute client for the purpose of illustration.  Imagine a customer that sells cable TV (my real one doesn't, but it's close enough) and they have a system for providing sales quotes to their customers.  The key thing here is that they sell services, and the services may have a complex price structure.  There are both recurring and non-recurring (installation) costs associated with providing the service, and when selling these services on to customers there is a balance to be struck between initial set-up charges and ongoing monthly charges.  If the setup cost is too high customers may be put off, but if the setup costs are discounted then they must be clawed back over the life of the contract.

The Legacy Application

At this stage I will describe the legacy application in more detail.  It is an intranet-hosted C# / ASP.Net line of business application, and the latest release was developed in Visual Studio 2005 (although I suspect that the first version was produced in .Net 1.1).  The database platform is SQL Server 2005.  The data access layer uses NHibernate, although it's quite an old version of NHibernate, and there is an extensive entity model in place already built onto the NHibernate libraries.  The UI uses the UI Process application block in order to implement an MVC (model-view-controller) design pattern.  There are a few 3rd-party controls used in the UI, which appear to have been harvested off the Internet as freebies.

The Challenges

"That's not too old, what's the problem?"  That's the first thing that springs to mind when you read the technical overview of the system.  The technologies are not outdated and are all capable of being upgraded easily.  The problems lie in the way that they have been used.  In fact, this leads me to the first of my observations on this topic, and that is this:

"In solution architecture, technology choice is not usually the primary source of problems.  Problems lie in the way that the technologies are applied."

The customer has tried several times to develop their application in the past, but the previous projects failed.  They have needed to be able to support new and different services for their sales department, but have so far been unable to.  This is because the original developers of the system did not give enough thought to their design in the first place, and so found that when trying to develop the system they could not add new products without creating more and more spaghetti code.  What I have been doing is trying to sort out this mess.

I'm coming to the end of this blog post now, and I hope that you've found it useful.  Over the next few posts in this series I will be going through some of the things that I have been doing to try to get to grips with this application and take it from a support nightmare to a half-decent system.  Hopefully some of the readers of these posts may be experiencing some of the same issues and might find some useful ideas in this series.

Coming next:  Taking over a legacy codebase.

Blog Relaunch

After about two and a half years in which I have been meaning to write more blog posts, I am now fulfilling my pledge to myself to get back to blogging.

During the course of my work I come across lots of new innovations and I think it's about time I shared some of these with the community at large.  I also come across plenty of horror stories and I'll be sharing some of these as well.

I'm going to try to put a post out at a minimum of once every two weeks, but we'll see how this goes.  :)