
Posts

Showing posts from September, 2009

Mocks, Fakes, Stubs - why bother?

Ever wondered why there are so many different names for the objects that mimic the behaviour of the 'real' objects in a system: mocks, stubs, fakes, doubles... I can't help looking at the tables of definitions on this page and thinking, why bother! Why have all these mocking frameworks gone and raised the bar of understanding for people who don't like TDD, or who don't currently do TDD? To me everything is a mock if it's not the real thing, pure and simple. So when I write tests I call everything a 'Mock', so the tests are easy to read & understand by anyone (including people averse to TDD). Perhaps this is one of the reasons why I've stopped using mocking frameworks in general. I'm sure some people think the distinction between a stub and a mock is important, but it isn't: the test is important, not what & how you mock. Awkward Coder
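A hand-rolled 'Mock' in this spirit is just a class that records what happened to it; no framework required. A minimal sketch (the interface and service names here are illustrative, not from any real project):

```csharp
using System;
using System.Diagnostics;

// Illustrative interface and service - not from any real project.
public interface IEmailSender
{
    void Send(string to, string subject);
}

public class AccountService
{
    private readonly IEmailSender _email;

    public AccountService(IEmailSender email) { _email = email; }

    public void Register(string address)
    {
        _email.Send(address, "Welcome");
    }
}

// The hand-rolled 'Mock': call it a mock, stub or fake, it's simply
// a class that stands in for the real thing and records calls.
public class MockEmailSender : IEmailSender
{
    public int SendCount;
    public string LastRecipient;

    public void Send(string to, string subject)
    {
        SendCount++;
        LastRecipient = to;
    }
}

public static class Program
{
    public static void Main()
    {
        var mock = new MockEmailSender();
        new AccountService(mock).Register("bob@example.com");

        Debug.Assert(mock.SendCount == 1);
        Debug.Assert(mock.LastRecipient == "bob@example.com");
        Console.WriteLine("ok");
    }
}
```

The test reads the same whether a purist would call `MockEmailSender` a mock, a stub or a spy, which is rather the point.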

Distributed Systems are Coupled - Period!

If you're doing distributed systems development, your systems will be coupled together - period! You can't get away from this statement; it's a fact of life. Now, how much you're coupled is another question. After the revelation I had last week that most REST systems aren't REST at all and are in fact just over-elaborated RPC (oh look, we've reinvented CORBA again!) - link - I've come to the conclusion that REST systems aren't easy to implement, and anyone who tells me otherwise doesn't know anything about distributed systems! If REST systems were as easy as people would make you believe, why are so many not classed as REST by Dr. Fielding... Awkward Coder
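To make the RPC-versus-REST distinction concrete, here's a hedged sketch (illustrative URIs and payloads, not from any real API) of the same resource served both ways. The first is what most self-described REST APIs actually return; the second uses hypermedia so the server, not the client, owns the coupling to URI structure:

```
GET /orders/42

-- "REST" that is really RPC-over-HTTP: bare data, the client must
-- hard-code every URI and legal state transition up front.
<order id="42">
  <status>open</status>
</order>

-- Hypermedia style: the response tells the client what it can do next,
-- so the only thing baked into the client is the link relations.
<order id="42">
  <status>open</status>
  <link rel="cancel"  href="/orders/42/cancel" />
  <link rel="payment" href="/orders/42/payment" />
</order>
```

The coupling never disappears; the hypermedia version just moves it from URI templates scattered through every client onto a small, stable set of link relations.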

Devlicio.us boys run out of duct tape!

trying to reply to a blog about duct tape programmers and guess what ;) Awkward Coder

Test Harnesses are counter productive...

How often do you hear: 'Why do I need to write tests when I've got a perfectly good test harness?' Now, I hear this often and I'm not surprised anymore when I hear it; it's a sign of a dysfunctional team, where team members don't value the team, they only value their own output. I've highlighted the words that give it away: 'Why do I need to write tests when I've got a perfectly good test harness...' There is no 'I' in 'TEAM'! Anyone who insists test harnesses are just as good as automated tests is plain wrong. They're selfish developers who only care about the code they've written - and probably don't get involved with the team. The reason it's selfish is that they might well be able to test all the edge cases with their test harness, but how is anyone else meant to know how to achieve this? They can't, unless they understand exactly how the test harness is constructed and meant to be used. It's more productive from a
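The fix is to capture the knowledge locked inside the harness as automated tests anyone can run. A minimal sketch (the calculator and its edge cases are illustrative, invented for this example):

```csharp
using System;
using System.Diagnostics;

// Illustrative production code - the kind of thing a developer pokes at
// by hand through a test harness.
public static class PriceCalculator
{
    public static decimal Total(decimal unitPrice, int quantity)
    {
        if (quantity < 0)
            throw new ArgumentOutOfRangeException("quantity");
        return unitPrice * quantity;
    }
}

public static class PriceCalculatorTests
{
    public static void Main()
    {
        // The edge cases the harness-owner carries in their head now live
        // in the codebase, runnable by the whole team on every build.
        Debug.Assert(PriceCalculator.Total(9.99m, 0) == 0m);
        Debug.Assert(PriceCalculator.Total(2.50m, 4) == 10.00m);

        bool threw = false;
        try { PriceCalculator.Total(1m, -1); }
        catch (ArgumentOutOfRangeException) { threw = true; }
        Debug.Assert(threw);

        Console.WriteLine("ok");
    }
}
```

The harness tells you what one developer checked once; the tests tell everyone what must stay true forever.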

So you think you're doing TDD?

I work freelance and, like most freelancers, I change jobs relatively frequently, so I do a lot of interviews. One thing I've noticed when being the interviewee is the number of companies that lie! One of the common technical lies I hear is 'We use TDD, all code is under test and we run automated builds...' I used to take this at face value - being a trusting fellow and not wanting to judge someone too quickly ;) So if I want to know how much truth is in the statement, I could follow up by asking about mocking frameworks, BDD & Dan North etc... But the killer question for me is to ask about their usage of an IoC container. Now IoC containers have nothing to do directly with TDD, but if you're doing TDD you'll be using Dependency Injection, and therefore you'll have at least considered using one when you've realised your classes are starting to have too many constructor arguments. So if they dismiss the usage of IoC without a good reason, I know they aren't telling the
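The 'too many constructor arguments' pressure looks something like this sketch (all interface and class names are illustrative; the container calls at the end are pseudo-API, as the exact registration syntax varies by container):

```csharp
// Illustrative dependencies a TDD'd class accumulates via constructor
// injection - each one exists so a test can substitute a fake.
public interface IOrderRepository { void Save(object order); }
public interface IEmailSender     { void Send(string to, string subject); }
public interface IAuditLog        { void Record(string message); }

public class OrderService
{
    private readonly IOrderRepository _orders;
    private readonly IEmailSender _email;
    private readonly IAuditLog _audit;

    // Three arguments already; real services grow more. Wiring this
    // graph by hand at every call site is what pushes you to an IoC
    // container - and why teams genuinely doing TDD have an opinion on them.
    public OrderService(IOrderRepository orders,
                        IEmailSender email,
                        IAuditLog audit)
    {
        _orders = orders;
        _email = email;
        _audit = audit;
    }
}

// Hypothetical container usage (syntax differs per container):
//   container.Register<IOrderRepository, NhOrderRepository>();
//   container.Register<IEmailSender, SmtpEmailSender>();
//   container.Register<IAuditLog, DatabaseAuditLog>();
//   var service = container.Resolve<OrderService>();
```

A team that has felt this pain can talk about it; a team that hasn't probably isn't injecting dependencies, and therefore probably isn't doing TDD.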

I know nothing moments...

I was researching RESTful APIs today; it's a couple of months since I worked on a RESTful project and I'm thinking of doing a small project with a RESTful API. I discovered this link and found out that all my previous RESTful APIs aren't really RESTful ;) So after discussing this on Yahoo groups, I feel like I know nothing about REST now :( Feeling stoopid now... Awkward Coder

How to test a static dependency used inside a class...

This is a question that keeps coming up, and I know if you've been practising it's a no-brainer, but I keep getting asked this by devs (I'm no testing God!). The long answer is to read this book and pay attention when it talks about 'inserting a seam'. The short answer is: carry on reading... Now several people (read Jimmy Bogard) have already answered this, but here is my take, looking at my current client: they have lots of deeply nested static dependencies. These are implicit dependencies, and what you really want is explicit dependencies, because they are easily testable. So I see a lot of classes like this, nested deeply in some object graphs. public class Foo { private string _url; private string _connectionString; private string _user; public Foo() { _url = System.Configuration.ConfigurationManager.AppSettings["SomeUrl"]; _connectionString = System.Configuration.Config
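A minimal sketch of the seam itself: wrap the static call behind an interface and inject it, so tests can supply settings without touching a config file. All names here (`IProvideSettings`, `FakeSettings`) are illustrative, invented for this sketch:

```csharp
using System;
using System.Diagnostics;
using System.Collections.Generic;

// The seam: an explicit, injectable stand-in for the static
// ConfigurationManager.AppSettings call.
public interface IProvideSettings
{
    string Get(string key);
}

// In production this would delegate to the static call the class used to
// make directly, e.g. ConfigurationManager.AppSettings[key]. Omitted here
// so the sketch stays self-contained.

// Test double: supplies settings in-memory, no app.config required.
public class FakeSettings : IProvideSettings
{
    private readonly Dictionary<string, string> _values =
        new Dictionary<string, string>();

    public void Set(string key, string value) { _values[key] = value; }
    public string Get(string key) { return _values[key]; }
}

public class Foo
{
    public readonly string Url;

    // The dependency is now explicit - visible in the constructor
    // signature instead of buried inside the method body.
    public Foo(IProvideSettings settings)
    {
        Url = settings.Get("SomeUrl");
    }
}

public static class Program
{
    public static void Main()
    {
        var settings = new FakeSettings();
        settings.Set("SomeUrl", "http://example.com/");

        Debug.Assert(new Foo(settings).Url == "http://example.com/");
        Console.WriteLine("ok");
    }
}
```

Same behaviour in production, but now `Foo` is testable in isolation, which is the whole point of making the implicit explicit.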

Application auditing - an example why I don't work at the weekend...

Ever had a situation where your OLTP requirements are impeded by your OLAP implementation? To put it another way: have you ever come across an auditing solution that causes transactions to time out when you're trying to save data into your production database? Well, the answer for me is far too often for my liking, and this is an example of 'synchronous auditing', which I believe is an anti-pattern in the making. I'm firmly in the camp that believes auditing should be done asynchronously, by a different (application) process. The reason I think it's an anti-pattern is that if how you audit affects the performance of your production database, then your performance is going to degrade over time, and if you insert 500,000 audit records a day that's going to happen relatively quickly. Now DBAs would say let's put a maintenance plan in place to clear down/manage the audit database, or even remove the synchronous auditing and perform a batch load o
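A hedged sketch of the asynchronous alternative, using MSMQ via `System.Messaging` (the queue path and class names are illustrative; this is the shape of the idea, not production code):

```csharp
using System.Messaging;  // reference the System.Messaging assembly

// Sketch: instead of writing audit rows inside the OLTP transaction,
// push an audit event onto a queue and let a separate process drain it
// into the OLAP/audit database in its own time.
public class QueueAuditor
{
    private readonly MessageQueue _queue =
        new MessageQueue(@".\private$\audit");  // illustrative queue path

    public void Record(string entityName, string change)
    {
        // Fire-and-forget: the production transaction never waits on the
        // audit store, so 500,000 audit rows a day can't slow OLTP saves.
        _queue.Send(string.Format("{0}|{1}", entityName, change));
    }
}
```

The trade-off is eventual consistency of the audit trail, which for reporting-style auditing is usually the right trade to make.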

Repository pattern - my preferred implementation...

Okay, it's nothing new and not even original, but I wanted to get down my currently preferred implementation of the repository pattern. I suppose this was prompted by a blog by Jimmy Bogard and Oren's statement a couple of months ago that the repository pattern may be near the end of its life. I still think in the .Net world it has great relevance, as most .Net devs can't organise code for toffee, and when you try to introduce layering into an application, an explicit repository layer is the first layer they seem to understand. So here is my current repository flavour - strawberry with a twist of lemon... public sealed class Repository<T1, T2> : IRepository<T1, T2> where T1 : IEntity<T2> { private readonly ISession _session; private readonly string _traceType; public Repository(IProvideSessions sessionFactory) { _traceType = string.Format("Repository<{0}, {1}>: ", typeof(T
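The excerpt cuts off before the supporting contracts, so here is a hedged reconstruction of what `IEntity<T2>` and `IRepository<T1, T2>` plausibly look like; the member list is an assumption based on the generic constraints shown above, not the original post's definitions:

```csharp
using System.Collections.Generic;

// Illustrative reconstruction: an entity keyed by an arbitrary id type,
// matching the "where T1 : IEntity<T2>" constraint above.
public interface IEntity<TId>
{
    TId Id { get; }
}

// Illustrative reconstruction of the repository contract the sealed
// class implements - the usual fetch/save/delete quartet.
public interface IRepository<T, TId> where T : IEntity<TId>
{
    T FindBy(TId id);
    IEnumerable<T> FindAll();
    void Save(T entity);
    void Delete(T entity);
}
```

Keeping the contract this small is what makes the layer teachable: a dev who can't yet organise code can still see exactly where persistence lives.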

Auditing with nHibernate...

It's a long time since I've posted anything, but I came across an interesting problem the other day whilst I was working: 'How can I audit changes to specific entities when using nHibernate?' There are several implementations out there already (see Oren's & others' posts), but the ones I've seen are too low level for my liking - they push the auditing of changes into the NH infrastructure, away from the Service implementing the business behaviour. I want my service to control and define what is audited; I don't want everything audited in the same manner. Auditing for me is the pushing out of events that have affected entities persisted in an OLTP database to an OLAP database, ideally in an asynchronous manner via some queuing mechanism (MSMQ). So how do you get events from NH? Now if you want to observe changes to entities in NH you have to register your interest by passing an implementation of an IXXXListener interface to the NH configuration when creating the S
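As a concrete instance of that `IXXXListener` registration, here is a hedged sketch using NHibernate's post-update event listener (interface and property names are from NHibernate's 2.x-era event system; check them against your NH version, and the listener body is illustrative):

```csharp
using NHibernate.Cfg;
using NHibernate.Event;

// Sketch: a listener that observes updates and forwards a notification,
// leaving the decision of what to actually audit to the service layer
// (or to whatever drains the queue).
public class AuditListener : IPostUpdateEventListener
{
    public void OnPostUpdate(PostUpdateEvent @event)
    {
        // In a real system, push this onto MSMQ rather than writing
        // audit rows synchronously here.
        System.Console.WriteLine(
            "Updated: " + @event.Entity.GetType().Name);
    }
}

public static class NhAuditConfig
{
    public static void Register(Configuration cfg)
    {
        // Register interest before building the SessionFactory.
        cfg.EventListeners.PostUpdateEventListeners =
            new IPostUpdateEventListener[] { new AuditListener() };
    }
}
```

Matching `IPostInsertEventListener` and `IPostDeleteEventListener` interfaces exist for the other write operations, registered the same way.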