Note: this topic is a work in progress, and will be expanded as I get time to write up my notes from the conference.
Once again we're in Oxford for the ACCU Conference - a week of almost solid learning, discourse, networking and of course socialising. This is the fourth year we have attended the Conference, and from experience we know that by the time it finishes on Saturday afternoon we will be absolutely exhausted! Nevertheless, the amount we learn here every year (and the excellent in-bar conversations at the end of each day) makes this conference more than worth it. 
As we arrived on Tuesday afternoon we had a bit of time to relax before things really got going (which they did in the evening, with seventeen of us descending on a restaurant in town and then monopolising the hotel bar afterwards...).
Day 1 - Wednesday
This morning, after dropping off our gear in the exhibitors' area, we joined a friend for breakfast and then headed down to set up our stand (an activity which almost always holds a few last-minute surprises).
The Evolution of Scrum (Jeff Sutherland - Keynote)
This was the first session of the conference proper, and as such the main conference room was packed to the gills. As we were delayed by setting up the stand we ended up right at the back of the room, so I couldn't see the slides particularly well, but fortunately that wasn't a problem for this session.
Jeff gave an interesting overview of the history of Scrum and its core principles, did the same for Lean Software Development, and then gave examples of how Scrum and Lean have fared in environments where their success was directly measured against traditional approaches. A notable example was a CMMI Level 5 company which was sufficiently impressed by the achievements of its Scrum teams that it began bidding Scrum projects at 50% of the cost of the waterfall equivalent.
I've not worked on a Scrum team myself, so I don't have an informed view of its advantages and disadvantages relative to other agile processes. It is beyond doubt, however, that Scrum can be very effective - but only if the organisation embraces not just its mechanisms but also its core principles. That, of course, requires the organisation not only to change itself, but to welcome that change. Unfortunately, experience tells us that far too many organisations struggle to do so.
Using Concurrency in the Real World (Anthony Williams)
Anthony is the author of the Boost threading library, and an authority on the new threading capabilities of C++0x.
In this session he talked about the (by now well understood) difficulties with designing multithreaded software before going on to describe core principles (e.g. carefully considering which data must be shared and avoiding hidden dependencies, singletons and globals) to apply in order to write simpler, more effective concurrent applications.
Among the simple but interesting techniques Anthony described were the use of a SynchronizedValue template class to wrap a value type with its own lock (reassuringly, this is a technique we already use extensively in our own code, albeit under a different name) and a templated DataFlowVariable type which executes operations in terms of tasks and can be used to produce scalable codebases very easily. The latter in particular is something I think we could learn from when we next review the way we use tasks in our codebases.
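The exact interface Anthony presented isn't recorded in my notes, but the core idea of SynchronizedValue - pairing a value with its own lock so the data can never be touched without holding the mutex - might be sketched like this (the `apply` callable-based API here is my assumption, not necessarily the talk's):

```cpp
#include <mutex>
#include <utility>

// Sketch of the SynchronizedValue idea: the wrapped value is private, so
// the only way to reach it is through apply(), which takes the lock first.
template <typename T>
class SynchronizedValue {
public:
    explicit SynchronizedValue(T initial) : value_(std::move(initial)) {}

    // Run f with exclusive access to the wrapped value, returning f's result.
    template <typename F>
    auto apply(F f) -> decltype(f(std::declval<T&>())) {
        std::lock_guard<std::mutex> lock(mutex_);
        return f(value_);
    }

private:
    std::mutex mutex_;
    T value_;
};
```

The appeal is that the lock and the data it protects travel together, so a hidden dependency (forgetting which mutex guards which variable) simply can't arise.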
He then went on to describe how new C++0x threading features such as futures (which provide a simple way of executing a task and waiting for a result to become available) and std::async (which automatically scales the number of threads spawned to the capabilities of the hardware) can also assist.
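As a small illustration of those two features working together (my own example, not one from the talk): a task is handed to std::async, the caller carries on with other work, and the future's get() blocks only when the result is actually needed.

```cpp
#include <functional>
#include <future>
#include <numeric>
#include <vector>

long long sum_range(const std::vector<int>& v) {
    return std::accumulate(v.begin(), v.end(), 0LL);
}

long long parallel_sum(const std::vector<int>& v) {
    std::vector<int> left(v.begin(), v.begin() + v.size() / 2);
    std::vector<int> right(v.begin() + v.size() / 2, v.end());

    // The runtime may run this on a new thread or defer it to get();
    // either way, fut.get() blocks until the result is ready.
    std::future<long long> fut = std::async(sum_range, std::cref(left));
    long long rest = sum_range(right);  // meanwhile, work on this thread
    return fut.get() + rest;
}
```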
Googletest and Googlemock C++ libraries (Seb Rose)
This session was (like the previous one) packed out. A notable late arrival who didn't get a seat was Kevlin Henney - which is almost karma, given how packed his sessions at the Conference usually are!
Googletest is an xUnit unit test framework developed to support the development of the Chrome browser. Like most C++ unit test frameworks, Googletest is macro based and has only a console runner, which can be configured via command-line options. One nice touch is that the pass/fail output is colour-coded.
Seb demonstrated running test fixtures within Visual Studio, with test results directed to the Output window. In this environment, double-clicking on a failure message in the Output window opens the source at the failing test, as you would expect.
One quirk is that exceptions must be explicitly tested for, whereas most xUnit test frameworks will automatically fail a test if an unexpected exception occurs. Under Windows, the --gtest_catch_exceptions option overcomes this limitation.
The mocking framework Googlemock is a templated framework based on jMock and using tr1::tuple. Interestingly it does not require Googletest, and should work with any test framework. Because of its heavily templated nature it does, however, impose a significant compilation-time penalty.
Quite reasonably, it can only mock virtual methods. It does however look quite flexible - allowing (for example) a test to define sequences of expected calls and parameters into a mock, and in what order.
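Googlemock generates its mock classes for you via macros; the hand-written stand-in below (all names invented for the example) is not Googlemock itself, but it illustrates why virtual methods are required: the code under test calls through an abstract interface, so a test can substitute a recording fake at run time.

```cpp
#include <string>
#include <vector>

// The production code depends only on this abstract interface.
class Mailer {
public:
    virtual ~Mailer() {}
    virtual void send(const std::string& to, const std::string& body) = 0;
};

// Code under test: notifies every user via whatever Mailer it is given.
void notify_all(Mailer& mailer, const std::vector<std::string>& users) {
    for (std::vector<std::string>::const_iterator it = users.begin();
         it != users.end(); ++it) {
        mailer.send(*it, "Conference starts tomorrow");
    }
}

// A test substitutes a fake that records calls instead of sending mail.
// Virtual dispatch is what makes this substitution possible.
class FakeMailer : public Mailer {
public:
    std::vector<std::string> recipients;
    virtual void send(const std::string& to, const std::string&) {
        recipients.push_back(to);
    }
};
```

With Googlemock the FakeMailer boilerplate collapses to a few MOCK_METHOD macros, plus the expectation-ordering features mentioned above.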
The C++0x Standard Library (Nicolai Josuttis)
This was a detailed study of the new features in the C++0x Standard Library, including all of the things you would expect - initialiser lists, new containers, auto variables, tuples, r-value references, move constructors and so on.
The new version of the C++ standard introduces so many changes that even a detailed study can only just scratch the surface, and there is certainly too much for me to even begin to do justice to it in a mere blog post. If you are planning to move your codebase to C++0x in even the medium term, I can only suggest that it would be a good idea to start familiarising yourself with the changes (and their implications and the resultant opportunities) as soon as you can.
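To give a small taste of the features listed above, here is a sketch (my own toy example, compilable with any C++11 toolchain) showing initialiser lists, tuples, auto and the implicit move on return:

```cpp
#include <string>
#include <tuple>
#include <vector>

// Initialiser-list construction; the return is moved, not copied.
std::vector<int> make_squares() {
    std::vector<int> squares = {1, 4, 9, 16};
    return squares;
}

// Tuples give a lightweight way to return multiple values.
std::tuple<std::string, int> lookup() {
    return std::make_tuple(std::string("answer"), 42);
}
```

At the call site, `auto result = lookup();` followed by `std::get<1>(result)` avoids spelling out the tuple type at all.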
Sponsors' Reception
Following the session there was an hour-long break before the Sponsors' Reception at 6:30pm, which is, as ever, our chance to showcase our products, answer enquiries and make some new contacts over a glass or two of wine. This year we have quite a bit of new stuff to showcase, so we have a lot to talk about.
Thence to the bar, of course....
Day 2 - Thursday
After the reception yesterday afternoon we packed down the equipment from our stand and congregated with other delegates in the bar for discourse (and of course, beer). In due course, a trip to Chutney's Indian Restaurant in town was proposed, taxis were booked and we continued our discourse over a curry and (of course) more beer.
After we returned, those unfortunate enough to wander into the hotel bar and stay into the early hours of the morning were, of course, fated to join the ranks of "The Lakosed" - and hence were distinctly wobbly this morning.
Hello! I'll Be Your Tester Today (James Bach - Keynote)
I have a feeling that when we come to the end of this year's ACCU Conference, a significant proportion of delegates will have this pinned down as their most memorable session. It was certainly the funniest one I've been to in a while.
James started out with a simple question, and an equally simple (but brutally honest) answer:
Question: What do testers do that's special and different from developers?
Answer: We break stuff (or more accurately: you broke it when you wrote it, and we just found you out!)
He then went on to give a very humorous description of the role and mindset of a consultant tester, using (among other things) the dropped calculator example. His delivery was infectiously funny, and as a result we spent most of the time laughing. His captioned reinterpretation of The Towering Inferno (reproduced below) gives a small insight into how he sees the tester's craft:
One interesting observation was that with the advent of agile software development, development teams effectively got so fed up with the process people (who are the same people that testers have been fighting for years) that they fired their testers.
Automated functional testing is one of James' big beefs with the software industry. He eloquently demonstrated that human-based testing offers something that an automated test just cannot - for example intuition, and thinking "outside the box". Automated systems just can't do that - which is why they excel at unit-level testing, and fail miserably at end-user testing. Most importantly, they cannot offer communication and feedback to developers in the same way as human testers.
He then went on a very funny excursion through some of the techniques testers may use, notably the "Click Frenzy", the "Shoe Test" (any test consistent with pounding on a keyboard with a shoe!), and "Branching and Backtracking".
One important point made was that testing is a heuristic activity encompassing a teachable set of skills. It's not just "playing around". Untrained testers immediately start writing test cases, based on assumptions they aren't aware of and don't even know how to think through.
Quote of the session: "I'm testing outside the specification because the specification is a rumour."
Genemodulabstraxibilicity (Steve Love)
"The feeling that it's just too difficult"
This session was a discussion of code and design smells, and the problems they can lead to - all ably illustrated by quotes from Edsger Dijkstra, Terry Gilliam et al.
Before starting that discussion, however, Steve observed that when we code we strive for an often conflicting set of goals (maintainability and so on), and that in doing so we often sacrifice simplicity. If instead we strive for simplicity, the rest will generally follow.
The classical corporate response to such smells is of course to lay down a rigid set of rules which proscribe certain constructs or structures. Unfortunately, this all too often becomes a dogma in its own right, with the original meaning being lost.
For example, one C++ coding standard I encountered early in my own career contained the edict "Do not use multiple inheritance" (on the grounds that it was a powerful technique which was likely to cause maintenance headaches for inexperienced developers), which today seems quite ludicrous. I wonder if that rule has been removed yet, or whether the company in question is still insisting that developers blindly follow it, despite having long forgotten why the rule was written in the first place...?
Worse, the sheer number of design and code smells (of which Steve could only mention a limited number, even in a 90 minute session) makes such a proscriptive approach even more unrealistic.
So, rather than issue such proclamations, would it not be better to encourage developers to actually think for themselves and make their own informed judgements instead? We are, after all, supposed to be software professionals.
Stood at the bottom of a Mountain looking up (Pete Goodliffe)
This session dealt with learning and learning curves. The "mountain" in the session title refers to the mass of "stuff" we know we don't know when we join a new project.
As software development is a knowledge profession, it requires that we constantly learn. However, learning (or more accurately being aware of how little we actually know about an as yet unfamiliar subject) can be frightening. Learning is difficult, and becomes harder as systems and technologies become more and more complex. It's a tough problem, but one we have to face if we are to succeed in our profession. At the end of the day it is our own responsibility to keep learning and evolve our understanding and knowledge.
Moral of this session: Always question what you're learning, and why. Beware of the obvious, and continually question your preconceptions and prejudices.
TDD at the System Scale (Steve Freeman & Nat Pryce)
Steve Freeman and Nat Pryce are the authors of Growing Object-Oriented Software, Guided by Tests, and this session promised to be an interesting one.
In most projects, developers who practice test driven development do so on small functional blocks (classes, functions etc.) and then try to glue them together to form a meaningful system. By contrast, this session described the presenters' experience in applying TDD principles to a financial system at system level (i.e. a "system-test first" test driven development). That's a very ambitious goal.
One way to do this is to interface directly with a domain model (which by definition does not include the external interfaces) within the application. This requires the application to be structured in such a way as to make such testing practical - i.e. that the high-level design of the application, rather than just individual components of it, be amenable to automated test. A system-level TDD approach may therefore require system-level decisions to be made earlier than would be normal in a traditional TDD project, which can be a scary concept to some organisations, since it forces them to make and commit to those decisions early.
In addition to the obvious design constraints, there are some real practical difficulties (asynchronous behaviour being an obvious one) to overcome in doing this sort of testing. Nevertheless, they seem to have managed it.
I must admit I did like their concept of using audit events as an alternative to logging.