Posted by: Stephen Oakman | July 15, 2010

A quick look at NSubstitute

For a side project I’ve started to use NSubstitute, which is a new mocking framework for .NET. We currently use Moq for pretty much all of our projects, but NSubstitute looks really nice, so I thought I would try it out.

The syntax is really clean, with the call to Substitute.For<IThingToMock>() returning an IThingToMock. This means you can create your mock and pass it straight into the class you are testing without having to call .Object like you do with Moq.

But where it really helped me out was with a particular situation. I’m dabbling with database migrations at the moment, using a Migration attribute which I scan for. When scanning for these migration attributes I wanted to make sure that they were stored in order, so with Moq I had this test:

var migrationStoreMock = new Mock<IMigrationStore>();
var storedMigrations = new List<IMigration>();
migrationStoreMock.Setup(s => s.StoreMigration(It.IsAny<IMigration>())).Callback<IMigration>(c => storedMigrations.Add(c));
var migrator = new Migrator(migrationStoreMock.Object);
migrator.Scan<TypeThatHasTestMigrations>();
storedMigrations[0].Version.ShouldEqual(1);
storedMigrations[1].Version.ShouldEqual(2);
storedMigrations[2].Version.ShouldEqual(3);
storedMigrations[3].Version.ShouldEqual(4);

What has always bugged me with this is that it ‘sort of’ follows the Arrange Act Assert pattern, but the callback that adds to a list is really arranging for the asserts – so it actually reads Arrange, Prepare For Assert, Act, Assert. Also, the assert isn’t on the mock itself, so there’s some indirection in what I’m asserting on.

Now for the NSubstitute equivalent:

var migrationStore = Substitute.For<IMigrationStore>();
var migrator = new Migrator(migrationStore);
migrator.Scan<TypeThatHasTestMigrations>();
migrationStore.Received().StoreMigration(Arg.Is<IMigration>(x => x.Version == 1));
migrationStore.Received().StoreMigration(Arg.Is<IMigration>(x => x.Version == 2));
migrationStore.Received().StoreMigration(Arg.Is<IMigration>(x => x.Version == 3));
migrationStore.Received().StoreMigration(Arg.Is<IMigration>(x => x.Version == 4));

With this version there’s a much clearer Arrange Act Assert flow going on. The assert is on the mock itself and doesn’t need a callback to collect the arguments first (which is quite nice). The other benefit is that there’s no .Object call when passing the mock into the class under test.

The Arg.Is predicate may seem a little noisy, but for me that’s ok. What’s more of an annoyance is that I’ve lost the ShouldEqual.
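
If I really wanted the ShouldEqual back, one option would be to capture the arguments myself using NSubstitute’s When..Do (a sketch only – the names mirror the test above, and the exact API surface may differ in these early builds):

var migrationStore = Substitute.For<IMigrationStore>();
var storedMigrations = new List<IMigration>();
migrationStore
    .When(s => s.StoreMigration(Arg.Any<IMigration>()))
    .Do(call => storedMigrations.Add(call.Arg<IMigration>()));
var migrator = new Migrator(migrationStore);
migrator.Scan<TypeThatHasTestMigrations>();
storedMigrations[0].Version.ShouldEqual(1);

Of course, that just reintroduces the prepare-for-assert step that the Received() version avoids, so it’s a trade-off rather than a fix.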

But the fact that I can just call into the mock through the Received extension method and use it in a more natural way is very appealing.

It is early days for NSubstitute, and some of the messages it reports back need work, but that’s a known issue and, for something at such an early stage, it’s a very minor point – and one that I would love to contribute to resolving.

Posted by: Stephen Oakman | July 14, 2010

TeamCity, VS2010 and Embedded Resources

We hit an issue recently when switching to Visual Studio 2010 for all our projects and running the 2010 solution file directly from the TeamCity build runner. It showed up as a test that only failed on the build server.

To give more context, we have projects started in Visual Studio 2008 being built using TeamCity. When moving to Visual Studio 2010, we introduced a new 2010 solution with references to all the existing projects. We then continued to use the 2008 solution for TeamCity builds. Although not ideal, it worked.

With a recent release of TeamCity, and some pain from introducing new projects to the 2010 solution (and forgetting to add them to the 2008 solution – oops), we bit the bullet and worked towards building directly from the 2010 solution.

Anyone that has gone down this same path will have hit the issue of TeamCity not being able to find MSBuild. There are a couple of ways around this, one of which is to install Visual Studio 2010 on the build server. Don’t worry – that thought never even entered our minds. Another solution (and the one we went with, even though I was trying to fight it and override the TeamCity environment variables) is to install the Windows and .NET SDK and then run the command line options (described here).

This enabled the solution to actually compile, but we got that one failing test.

The fix we used here was to set a system property in TeamCity to specify the location of AL.exe. In TeamCity, navigate to your build project, select Edit Configuration Settings and then Properties and Environment Variables. Then add a system property called ALToolPath with the value ‘C:\Program Files\Microsoft SDKs\Windows\v6.0A\Bin’. This ensures the right version of AL.exe is used for a .NET 3.5 project.

Hopefully this will save someone some time.

Just wasted a bunch of time looking at a problem which initially pointed to a permissions issue in our good old friend IIS7…

The error we got was a 500.19 error with the message:

The requested page cannot be accessed because the related configuration data for the page is invalid.

Well that’s a red herring. It turned out we had a Virtual Directory set up with the same name as an MVC controller – removing that sorted everything out!

Posted by: Stephen Oakman | February 9, 2010

The Migrate Now approach

Any big system that has formed the basis of a company’s core business reaches a point where compromises are made, and these eventually come to a head.

These compromises usually manifest as divergence within a code base (we’ll branch this, just temporarily) or as duplication (a quick copy and paste for now). All of these are invariably made with good intentions and pretty much always to support the business – but for one reason or another they never get fixed.

Add layer upon layer of these compromises and one outcome is almost inevitable – there will be a big refactor or rewrite project.

Now that we’ve got to this point, and have some understanding of the history of how we got here, we seem to have two options (three if we include carrying on as we are). So which one?

The Big Rewrite

This is the one that always seems so appealing to everyone. It gives us an opportunity to put right everything we have learnt from past mistakes. We now understand the system much better and can enable all the features everyone wants.

But The Big Rewrite approach tends to fail. As developers we underestimate the size of the system (even though we may know the system very well) and we overestimate our abilities. This, coupled with the business’s need to keep selling and using the system, means we end up in a race condition. We are racing to make the new system as feature complete as possible while still supporting, maintaining and extending the old system – often with dedicated teams on each system (with the ‘elite’ team being the big rewrite team).

I’ve yet to see a big rewrite successfully pay off. I have, however, seen a big rewrite cause many more problems than it attempts to resolve, often resulting in some customers being on the new but less feature-complete system and other customers on the older but more feature-complete system. Sometimes this divergence goes even further (a branch of the not-yet-complete big rewrite system).

Refactor

As you can probably guess, this is the approach I favour. Rather than rewrite, refactor the system as you go – but even then this approach can sometimes end up in a similar situation to the big rewrite. One of the main reasons for this is that to refactor successfully we need to make sure we have a crucial enabler in place – the migration.

If we do refactor, or we do manage to pull off the big rewrite (and I’m sure it does happen), then what? We now find we have all these customers on our old system and we want to migrate them to the new system. If we can’t do this then, after all, what’s the point?

But the migration itself is now seen as a big risk. We have to migrate customers’ data across to new databases, invariably stopping customer access to the system as we do it, which means picking a convenient timeframe to do it in (weekends being the obvious choice). And what do we do if it fails?

I’ve seen this big migration at the end cause many good intentions to go sour. What has happened in the past is that many customers never migrate across, for whatever reason, and the new system and the old system continue on – diverging further and further as they go, resulting in yet another code base to maintain.

Migrate Now

Which leads on (eventually) to the point of this post. Before attempting either rewrite or refactor, see if there’s a way of actually migrating to the new system now. Yes, right now.

It seems implausible, as we haven’t even started on a new system yet, but it may actually be much simpler than you realise (and in many ways it is more of a mindset change than anything else).

In a web system recently, the simplest way to achieve this change would be to introduce a proxy (or what we were calling a ‘shim’) which does nothing more than proxy requests down to the underlying system.
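
To make that concrete, here’s a minimal sketch of what such a pass-through shim could look like as an ASP.NET handler. The class name and legacy base URL are made up, it only really deals with GETs, and error handling is left out:

using System.Net;
using System.Web;

// A minimal pass-through 'shim': every request is forwarded to the old
// system and the response is streamed straight back to the caller.
public class PassThroughShim : IHttpHandler
{
    // Hypothetical address of the old system sitting 'below' the shim.
    private const string LegacyBaseUrl = "http://legacy.example.local";

    public bool IsReusable { get { return true; } }

    public virtual void ProcessRequest(HttpContext context)
    {
        // Build the equivalent URL on the old system and forward the request.
        var target = LegacyBaseUrl + context.Request.RawUrl;
        var forwarded = (HttpWebRequest)WebRequest.Create(target);
        forwarded.Method = context.Request.HttpMethod;

        using (var response = (HttpWebResponse)forwarded.GetResponse())
        using (var body = response.GetResponseStream())
        {
            context.Response.StatusCode = (int)response.StatusCode;
            context.Response.ContentType = response.ContentType;

            // Copy the response body back byte for byte.
            var buffer = new byte[8192];
            int read;
            while ((read = body.Read(buffer, 0, buffer.Length)) > 0)
            {
                context.Response.OutputStream.Write(buffer, 0, read);
            }
        }
    }
}

Wire something like that up as the catch-all handler and, as far as the outside world is concerned, everyone is already on the new system.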

But actually, it did much more than this.

The mindset change

By migrating now, we are on the new system. We have migrated all our customers with no loss of service and it just works (assuming it does, that is). All we are doing is proxying the requests through.

But we no longer have multiple teams – one on support and one on the new system – we just have the one new system team, as we don’t have any customers on the old system.

We’ve also avoided the big migration at the end – with all its inherent risk – and, more importantly, we’ve done it without leaving some customers on the old system, so no divergence. Now we have one code base going forward, with all our customers already using it.

What now?

Ok – so technically nothing has changed. We still have two or more systems underneath, even if we have one system from above. So what is the next step? One next step would be to start identifying static content that is shared between systems. This could be just a help page, or a contact us page, or something of that nature. Whatever it is, we can now promote that duplicated static content up to the ‘shim’ itself.

This small change forces us to make some decisions. Where do we put this static content? How do we serve it up through the ‘shim’? Hopefully these will be fairly easy issues to solve. That’s the point though – think small. Small baby steps.

Once it’s promoted up we can switch the proxy to redirect to the new shim-provided content. If there are any issues we should be able to redirect the proxy back to the old systems, so we are only ever a small step away from rolling back to how it was before.
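
To show how small that rolling-back step can be, here’s a sketch that builds on the pass-through handler from earlier – the appSettings key, folder and page list are all made up:

using System;
using System.Configuration;
using System.Web;

// Builds on the PassThroughShim sketch above: promoted static pages are
// served from the shim when the switch is on; everything else (or the
// switch turned off) still goes straight through to the old system.
public class PromotedContentShim : PassThroughShim
{
    private static readonly string[] PromotedPages = { "/help", "/contact-us" };

    public override void ProcessRequest(HttpContext context)
    {
        var serveFromShim =
            ConfigurationManager.AppSettings["ServePromotedContent"] == "true";
        var path = context.Request.Path.ToLowerInvariant();

        if (serveFromShim && Array.IndexOf(PromotedPages, path) >= 0)
        {
            // Serve the copy now hosted alongside the shim.
            context.Response.ContentType = "text/html";
            context.Response.WriteFile(context.Server.MapPath("~/Promoted" + path + ".html"));
        }
        else
        {
            // Rolling back is just flipping the config switch off again.
            base.ProcessRequest(context);
        }
    }
}

Flip the switch on, watch it, and if anything looks wrong flip it back off – exactly the small step away from rolling back described above.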

Once any kinks are ironed out and everything is going through the proxy to the shim-hosted content ok, delete the now redundant static content from the old systems. By deleting the redundant content we’ve, very slightly, reduced what we need to port across.

More baby steps

Next, look at ‘almost’ static content – again, something like help pages or contact us pages that is almost shared but not quite the same. Look at how a basic templating engine (StringTemplate, Spark or NVelocity, for instance) could be used along with a basic model to hold the differences. Then deal with the same sort of small questions: where will the templating engine sit, how will the content be served up, where does the model data come from (and what is the identifier for each of the ‘below the shim’ systems)? These should again be simple enough questions to find answers for. Again, the point here is still very small changes.
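
To keep this sketch engine-neutral (StringTemplate, Spark or NVelocity would each do the job properly), here’s the shape of the idea with a made-up model and simple token replacement:

// A deliberately tiny stand-in for a real templating engine. The model,
// property names and token syntax are illustrative assumptions only.
public class AlmostStaticPageModel
{
    public string SystemName { get; set; }
    public string SupportEmail { get; set; }
}

public static class AlmostStaticContent
{
    // Replace ${...} tokens in the shared template with per-system values.
    public static string Render(string template, AlmostStaticPageModel model)
    {
        return template
            .Replace("${SystemName}", model.SystemName)
            .Replace("${SupportEmail}", model.SupportEmail);
    }
}

The per-system model (keyed by whatever identifies each ‘below the shim’ system) is the only moving part, so swapping the token replacement for a real templating engine later is a small change.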

Again, when this almost static content is promoted to the shim, and the redirect through the proxy to the shim-hosted content is working, delete it from the old systems.

At this point we’ve managed to nibble at the edge of our problem but by now the mindset change should have been reinforced. We are looking at how we can promote small parts of the existing systems into the shim itself, and once complete, deleting those small parts from the old systems. If we ever get a problem, we should only be a small redirect away from using the old system again until we resolve any issues and can redirect to the new parts of the system.

A bigger nibble

I was going to continue this but it’s already got way too wordy as it is (and it’s close to midnight), so I will continue these thoughts in another post. Now, given my current trend in posting, this may be in a few months’ time, but bear with me 🙂

Posted by: Stephen Oakman | February 9, 2010

Procrastinating…

I’ve just tweeted about how I’m procrastinating. I’m looking at our website and I want to change it as, well, let’s be honest – it sucks.

I’ve had my eye on KooBoo, a .NET (and more importantly, an MVC) based CMS which looks very impressive. I’m also looking at Ruby on Rails using RubyMine as the IDE. Both good excuses to try out something different. But who am I kidding, the problem with our website isn’t technology based. What it lacks is design and content.

So now I’m procrastinating by posting this instead. At least it’s content 🙂

Posted by: Stephen Oakman | December 14, 2009

AltNetBeers Cambridge – The Xmas Edition

Tomorrow night (15th December) I’m facilitating AltNetBeers Cambridge hosted at The Tram Depot from 7pm.

We plan on this being a regular monthly event and the AltNetBeers format is a great way of fostering some intense discussion, all driven by the people that come along. I’ve not attended many of the London ones (actually, only one, which was part of the Alt.Net UK conference weekend) but it was fantastic and I hope that everyone who attends tomorrow night will have just as much fun.

We (The Agile Workshop) would also like to facilitate some open coding days, as our new offices (more on that later) have the space for it. I’m aiming to discuss this at some point tomorrow evening to get an initial idea of who might be interested.

So please, if you are free tomorrow evening, come along. It should prove to be a fun evening.

Posted by: Stephen Oakman | October 19, 2009

It has to be the right ‘red’

When following the mantra of ‘Red, Green, Refactor’ the ‘Red’ step is often overlooked.

Even when getting to the ‘Red’ failing test step, it is important to check that it’s the right ‘red’. It’s far too easy to produce a failing test which fails for the wrong reasons. A quick check that it’s failing for the reasons you expect is worth it.
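
For example (an illustrative NUnit-style test with made-up types), the same red can mean two very different things:

[Test]
public void Applies_discount_to_order_total()
{
    var order = new Order();   // hypothetical class, purely for illustration
    order.AddItem(100m);

    order.ApplyDiscount(0.1m);

    // The right 'red': this assertion fails (expected 90 but was 100)
    // because ApplyDiscount isn't implemented yet.
    // The wrong 'red': a NullReferenceException thrown from AddItem,
    // which has nothing to do with the behaviour under test.
    Assert.AreEqual(90m, order.Total);
}

Both show up as a red bar, but only the first is telling you anything about the behaviour you are about to implement.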

If you don’t check the failing test and skip straight to the implementation to make it pass, you may find the test just keeps failing. When this happens there’s a natural tendency for tunnel vision to creep in as you go deeper and deeper into finding out why it’s not working, before the ‘ahha’ moment when you realise where the error actually is.

I speak from experience, having been at that point far too many times. Not that I want to admit it – each time is a personal head-slap moment as I realise how insanely foolish I’ve been.

So please, take the time to stop at red and check it’s the right shade of red before moving on to green. Forget the time it will save in the long run, and instead just be glad that you won’t feel the same shame and humiliation that I’ve felt in the past!

Posted by: Ronnie Barker | September 23, 2009

Quick hack to configure directory security on MVC Views with IIS

Just had a little issue where we wanted to secure an ASP.NET MVC app using directory security but allow a single action through with anonymous auth. Unfortunately there is nothing to click on in IIS, since MVC’s routes are decided dynamically and not from the file system.

I’m sure there is a ‘proper’ way to edit the IIS metabase – but we created the folder structure that represents the action under the main folder, set up the auth on that and then deleted the folder. The metabase details persist and now everything works fine!

Posted by: Stephen Oakman | September 14, 2009

We are giving a talk at Bristol on September 22nd

We are giving a talk at the .NET Developer Network in Bristol on Tuesday September 22nd. The focus of the talk is going to be a hands on TDD session where we will be taking a story and driving through the story using TDD. Along the way we will be showing how tools such as ReSharper really help support this process.

The intention is for this to be very much an audience-driven session. Ronnie and I will be taking on the driver and navigator roles of a pair and, with the navigator gathering the audience’s opinions, direction, questions and feedback, we will all drive the story forward.

Our goal is to take a hands-on session like this and to capture and expand on the ‘ahha’ moments that come from driving with TDD. A lot of TDD and other techniques, processes and methods are very subtle and work together as a whole. Taking just one of these in isolation often fails to bring real benefits. We hope that by learning with the audience in this way we can explore how these elements complement each other.

Given that one of us is called Ronnie Barker I would also hope that we can inject a little humour into the session as well – even if it is a ‘And it’s good night from him’ at the end 🙂

Posted by: Stephen Oakman | September 9, 2009

Exec and sp_executesql

I’m working on a project that has a custom data access layer and lots and lots of stored procedures. One of my tasks right now is to look at optimising some of the more troublesome procs. Typically many of these generate dynamic SQL.

So I thought (as did others) that one of the main culprits of the performance problems was the straight execute of the generated SQL statement as opposed to using sp_executesql. Using execute means that every single variation of a statement (the ‘parameter’ values themselves) generates a new execution plan rather than using an existing cached one. Using sp_executesql means that the SQL statement itself uses a cached execution plan, with the values passed in as named parameters.
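
As a rough illustration (the table, column and parameter names are made up, and @customerId is assumed to be a parameter of the proc), the difference inside the proc looks something like this:

-- Straight execute: each distinct value produces a different statement text,
-- so each variation gets its own execution plan.
DECLARE @sql nvarchar(max);
SET @sql = 'SELECT * FROM Orders WHERE CustomerId = '
         + CAST(@customerId AS nvarchar(20));
EXEC (@sql);

-- sp_executesql: one statement text, so one cached plan, with the value
-- passed in as a named parameter.
SET @sql = 'SELECT * FROM Orders WHERE CustomerId = @customerId';
EXEC sp_executesql @sql, N'@customerId int', @customerId = @customerId;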

So I set about modifying the proc to use sp_executesql. This involved building up a list of the parameters and their data types to pass in, as well as passing in the parameters themselves. It wasn’t as easy as I originally imagined but was fun (in a perverse way).

To validate the change I wrote a unit test which made numerous calls to the proc passing in different parameter values. In this way I could get some timings of the proc before I made the switch and then timings for afterwards as well.

But when I ran the tests to gather the timings, the sp_executesql tests ran SLOWER!! Usually not much slower, but sometimes noticeably so. This is against a shared dev database, so the timings partly vary depending on what others are doing to the database, but even so, over many of the runs I made, the original execute version of the proc ran faster than the new sp_executesql version.

I have managed to speed up the proc, though – by adding an index on two columns (although I still need to test this properly). I’m just surprised at how my initial assumption (or belief) in sp_executesql over execute has been dashed.

I’m pretty sure sp_executesql is still the way to go for various other reasons – but I would ultimately like to use NHibernate and get rid of 95% of the existing procs. That’s not really an option right now, though.
