Products, the Universe and Everything

The Riverblade Developer's Blog


Welcome to our developer's blog. We hope that this forum provides an insight into us, our products and how we develop them. Please feel free to write to us if you have anything to add to any of the posts here.



Visual Studio "Orcas" Beta 1
Tuesday, April 24, 2007

We thought we'd be ahead of the game for once... :->

When Visual Studio "Orcas" Beta 1 was released to MSDN subscribers a few days ago we downloaded a copy and installed it on one of our development boxes (a three year old XP system which already has VC6, eVC4, VS2002, VS2003 and VS2005 installed...so it won't be lonely) to take a look and see what we need to do to Visual Lint to support it.

I have to say that for a first Beta the stability of this version is remarkably good - it's way ahead of where VS2005 was at Beta 2 (let alone Beta 1, which was absolutely ghastly). It seems that the change in process some of the MS bloggers have been talking about (and the monthly CTP builds, of course) have paid off. That can only be a good thing for those of us who actually have to use the thing.

We're focused on the extensibility interfaces of course - I've not even opened the WPF Designer or looked at what's happened to the frameworks - but what we see so far is sufficiently good for me to install it directly onto my laptop. That's happening right now.

At first glance it does not look like we will have to do much - update the add-in registration to support HKLM\Software\Microsoft\VisualStudio\9.0, integrate yet another version of VCProjectEngineLibrary (the Visual C++ automation interface library) and fix whatever oddities we encounter along the way.
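
For the curious, the registration change is little more than writing the usual AddIns values under the 9.0 hive as well as the 8.0 one. A minimal sketch of what that boils down to (illustrative only - the "VisualLint.Connect" ProgID and the values written here are placeholders rather than our actual registration code):

    #include <windows.h>

    // Register a (hypothetical) "VisualLint.Connect" add-in for VS "Orcas" by
    // writing the standard AddIns values under the new 9.0 registry hive.
    bool RegisterAddInForOrcas()
    {
        HKEY key = 0;
        const wchar_t* path =
            L"Software\\Microsoft\\VisualStudio\\9.0\\AddIns\\VisualLint.Connect";

        if (RegCreateKeyExW(HKEY_LOCAL_MACHINE, path, 0, 0, 0,
                            KEY_SET_VALUE, 0, &key, 0) != ERROR_SUCCESS)
            return false;

        const wchar_t friendlyName[] = L"Visual Lint";
        const DWORD loadAtStartup = 1;      // LoadBehavior: load when the IDE starts

        RegSetValueExW(key, L"FriendlyName", 0, REG_SZ,
                       reinterpret_cast<const BYTE*>(friendlyName),
                       sizeof(friendlyName));
        RegSetValueExW(key, L"LoadBehavior", 0, REG_DWORD,
                       reinterpret_cast<const BYTE*>(&loadAtStartup),
                       sizeof(loadAtStartup));

        RegCloseKey(key);
        return true;
    }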

So far, the only nasties we've found are that creating nested menus doesn't appear to work and the IDE constantly sources AfterExecute events for the "Active Configuration" drop list even when the configuration hasn't changed. Minor niggles aside, we should have an internal build of Visual Lint which works with it within a few days.

Postscript: This capability was publicly released with Visual Lint 1.5.5.69 on 12th May.

Posted by Anna at 09:15 | Get Link

 

Back to Reality
Monday, April 16, 2007

We're now back in Bournemouth, for the first time since Monday.

We've had a great time at the Conference this week. We've learnt a great deal, met some amazing (and very entertaining in some cases!) people, made some very useful contacts and given away quite a few Visual Lint licences.

We'll definitely be back next year.

Oh and of course - it's now past Easter, and the Grockles have descended. In swarms...

The Grockles are back...

Posted by Anna at 15:35 | Get Link

 

A Qt way to eat breakfast
Sunday, April 15, 2007

Feeling Qt at breakfast


At breakfast yesterday morning we were sitting in a conference room listening to a seminar by Trolltech - the people behind the Qt cross-platform C++ framework.

Although we use WTL for our current projects, we are always looking to learn new techniques - and the lack of cross-platform support is of course the big Achilles' heel of frameworks such as WTL (and indeed MFC). It was an interesting presentation, and although we don't have any direct application for it at the moment, it is certainly something we will bear in mind for the future.

The first session of the day was "Towards a Memory Model for C++" with Hans-J. Boehm from HP Labs.

Multithreading is increasingly important in modern software systems, and the difficulties of writing multithreaded code are well known. Java and C# define threads as a core part of the language, but even there getting the semantics of threading right is exceptionally difficult. C++ (as ever) presents a different set of problems and considerations, and has traditionally addressed threading using libraries.

Hans discussed why this is not adequate, using Pthreads and the Win32 threading model as illustrations. The impact of these failings is clear - multithreaded development in C++ is harder (and more error prone) than it needs to be.

As a result of this, efforts are underway to ensure that the C++0x standard will define a memory model describing the visibility of memory accesses to other threads. Herb Sutter is leading a similar effort for Microsoft platforms, and the intention is that the outcomes of the two efforts will be compatible. That can only be good news for C++ developers.

The detail of the proposals was beyond the scope of the session, but one obvious point is that a standard threading API will be provided as part of the C++0x language. The full proposal for this area of the standard can be read on the web at:

http://www.hpl.hp.com/personal/Hans_Boehm/c++mm.
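
To make the visibility issue concrete, here's a sketch of the kind of code a defined memory model makes well-defined - written against the std::atomic and std::thread interfaces that eventually emerged from this work, so treat it as illustrative of the idea rather than anything shown in the session:

    #include <atomic>
    #include <thread>
    #include <iostream>

    // One thread publishes a value, another consumes it. Without a defined memory
    // model the consumer could legitimately see 'ready' become true yet still read
    // a stale 'payload'; with acquire/release semantics the ordering is guaranteed.
    std::atomic<bool> ready(false);
    int payload = 0;

    void producer()
    {
        payload = 42;                                  // ordinary write...
        ready.store(true, std::memory_order_release);  // ...published by the release store
    }

    void consumer()
    {
        while (!ready.load(std::memory_order_acquire)) // pairs with the release store
            ;                                          // spin until published
        std::cout << payload << std::endl;             // guaranteed to print 42
    }

    int main()
    {
        std::thread t1(producer);
        std::thread t2(consumer);
        t1.join();
        t2.join();
        return 0;
    }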

After a short break we both headed to the Bladon suite for "Better Bug Hunting" with Roger Orr.

The session started with an introduction discussing the high cost of bugs and the wide variation in how effective individual developers are at finding bugs quickly without introducing new ones in fixing them.

Bugs come in many forms, ranging from inconsistent or nonstandard UI and badly specified features, to poor performance or instability. These are, however, only symptoms - the root causes are often more subtle defects or flaws.

A simple approach to bug hunting might include:

  • Understand the system and how it works

  • Reproduce the failure. Unit testing can help a great deal here

  • Identify where the problem really is - not just where the symptom is

  • Change one thing at a time

  • Keep an audit trail (so you know what you changed and why)

  • Canvass views from others on the defect and possible fix

  • If you didn't fix it, it ain't fixed. Don't be tempted to "hide it under the rug"
More typically, it goes something like this:

  • Hope it goes away

  • Blame someone else

  • Open a debugger and poke around in the vague hope of finding something

  • Try random changes to see if the bug goes away

  • Fix the symptoms, but ignore the underlying defect.
Obviously, being able to reproduce the bug is absolutely key, and often developers have very limited information to go on. This increases costs and makes it less likely that the developer will correctly identify the root cause. Communicating to testers and end users what information is required when a defect is identified is essential.

An obvious improvement is to write scripts or code to collect the supporting information we need in order to triage the bug. Included in this category are of course the ubiquitous crash dumps. Logfiles also have their place - but only if the logs are easy to find and their contents are comprehensible. A bad log message is worse than none at all.
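
As an aside, the crash dump side of this is cheap to add on Windows. A minimal sketch using the DbgHelp MiniDumpWriteDump() API (error handling trimmed, and the dump file name is just a placeholder):

    #include <windows.h>
    #include <dbghelp.h>
    #pragma comment(lib, "dbghelp.lib")

    // Unhandled exception filter which writes a minidump alongside the process.
    // Install once at startup with SetUnhandledExceptionFilter(WriteCrashDump).
    LONG WINAPI WriteCrashDump(EXCEPTION_POINTERS* exceptionPointers)
    {
        HANDLE file = CreateFileW(L"crash.dmp", GENERIC_WRITE, 0, 0,
                                  CREATE_ALWAYS, FILE_ATTRIBUTE_NORMAL, 0);
        if (file != INVALID_HANDLE_VALUE)
        {
            MINIDUMP_EXCEPTION_INFORMATION info = { 0 };
            info.ThreadId          = GetCurrentThreadId();
            info.ExceptionPointers = exceptionPointers;
            info.ClientPointers    = FALSE;

            MiniDumpWriteDump(GetCurrentProcess(), GetCurrentProcessId(), file,
                              MiniDumpNormal, &info, 0, 0);
            CloseHandle(file);
        }
        return EXCEPTION_EXECUTE_HANDLER;
    }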

Any defect may relate to others which are already documented. As a result it is always worth looking for patterns in the defect tracking database.

Even when you think you've identified the cause, it is worth taking a step back. Could there be another possible cause? Is your fix really the right one or do you need to look a bit deeper?

Some classes of defects are of course much harder than others:

  • You can't reproduce the defect

  • The defect affects widely separated blocks of code

  • Memory corruption

  • Timing related

  • Environment related (e.g. permissions)

  • It was not the fault you thought it was.
Some techniques which can be used to increase effectiveness include:

  • Adding tracing

  • Refactoring areas of the code where you think the defect may lie

  • Running the system under a virtual machine in a configuration which is representative of that on which the bug was previously seen

  • Deliberately stressing the code to see if the failures match the observed symptoms

  • Using static or dynamic analysis.

  • When the cause of a bug is identified, it makes sense to take a step back to determine how to more easily identify and fix bugs of this class.
After a brief and spartan lunch (all the hotel provided was a few sandwiches) Beth and I headed back to our room to rest for a while (and then for hot chocolate on the terrace!). Conferences really are exhausting!

Before heading in for the final session we had a chat with Julie Archer and Ewen (the conference chair) about sponsorship opportunities for next year's conference. The options they gave us were exactly what we were looking for, so I would hope we will have something in place for next year's conference. Watch this space...

The final session of the conference was "C/C++ Programmers and Truthiness" with Dan Saks, an entertaining look at how both the general public and developers (who really should know better) will give "gut instinct" precedence over contrary evidence.

Dan's major example was the placement of const in variable declarations (i.e. whether you should use const T * or T const *), a cause he's been fighting for some years. It got interesting when Herb Sutter and Bjarne Stroustrup both got involved in the discussion...
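
For anyone who hasn't run into the argument before, the two spellings mean exactly the same thing - the disagreement is purely about which reads more naturally:

    int main()
    {
        int value = 42;

        const int* p1 = &value;   // "const on the left": pointer to const int
        int const* p2 = &value;   // "const on the right": exactly the same type as p1
        int* const p3 = &value;   // a const pointer to (modifiable) int - a different thing

        // *p1 = 43;              // error: p1 and p2 both point to const int
        *p3 = 43;                 // fine: p3 points to non-const int
        return (*p1 == *p2) ? 0 : 1;
    }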

Personally, I think we've got bigger battles to fight. There are far too many developers out there who don't even use const, and given the trickle of people I keep running across who come to occasional C++ coding from languages without const (for example C#), this is not likely to get better in the foreseeable future.

Posted by Anna at 07:40 | Get Link

 

Forgive Me Father, for I Have Singleton'ed
Saturday, April 14, 2007

(or: When Patterns meet Anti-Patterns, do they Annihilate?)

We've had a late start this morning, skipping the opening session because we needed a break after the marathon yesterday (I crashed out with a headache at 6pm yesterday, only waking up to go to the bar to eat at 8pm).

This morning we met up with a couple of other developers at breakfast and had a lively discussion about multithreading, the idiosyncrasies of development tools and network compilation techniques (as seen in Incredibuild). We found the discussion very useful - in particular it helped us to focus some ideas on future versions of Visual Lint, notably the potential for integration with build and continuous integration systems. That's for the future though, and I'll write about that in due course when we've developed our ideas a little further.

The first session this morning was "Choose your poison: Exceptions or Error Codes?" with Andrei Alexandrescu. Coming from a Win32 background as we do, I suspect we tend to use the latter a little too much, so we saw this as an opportunity to learn an alternative viewpoint.

Andrei's delivery was humorous and entertaining. One early point was that in many cases it can actually make sense to implement both schemes in a library. Ultimately, the consumer is best placed to evaluate which scheme is appropriate for a given situation.

Another approach is to categorise errors into soft errors and hard errors. The former can safely be ignored; the latter cannot. A further consideration is the state of the system - if the system is left in an undefined state, an exception is almost certainly appropriate.

The old C standard library function atoi() was used as an example of a library function designed without consideration of error conditions - it neither returns an error code nor throws an exception on failure, instead returning 0 (arguably the most common return value!) on error.
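
A quick illustration of the problem (a sketch of mine, not anything from the session):

    #include <cstdlib>
    #include <cerrno>
    #include <cassert>

    int main()
    {
        // A genuine zero and a parse failure are indistinguishable with atoi()...
        assert(std::atoi("0") == 0);
        assert(std::atoi("oops") == 0);

        // ...whereas strtol() at least lets the caller detect the failure.
        const char* text = "oops";
        char* end = 0;
        errno = 0;
        long value = std::strtol(text, &end, 10);
        bool failed = (end == text) || (errno == ERANGE);
        assert(failed && value == 0);
        return 0;
    }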

Andrei presented four solutions to return error information:

  • Set a global state (the "errno" approach). This approach (also used in Win32 as GetLastError()) is easy but has big issues with threading and makes error handling far too optional for many tastes

  • Encode the error code as a reserved return value. A reserved value conveys very little information and cannot support centralised handling. It also requires error values to be reserved within the normal return range, which is not possible for atoi(), for example

  • Encode the error information as a value of a distinct type (an error code). This approach (commonly used in Win32) works quite well but does not lend itself to centralised handling - its success is entirely reliant on the caller checking the returned error code

  • Exceptions (effectively "covert return values"). Exceptions are good at supporting centralised error handling. Local handling is also possible, but more long-winded.
A key consideration is that only certain callers understand certain errors - the local caller may not, but a higher level method which ultimately invoked it may be better placed to understand the context of the error. The converse can often be true.
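
To make the contrast between the last two approaches concrete, here's a small sketch (the function names are mine, not Andrei's): with an error code every caller has to remember to check, while an exception can propagate up to a single, centralised handler.

    #include <cstdlib>
    #include <stdexcept>
    #include <string>
    #include <iostream>

    enum ParseResult { ParseOk, ParseBadFormat };

    // Error-code style: success or failure is conveyed through the return value,
    // and it is entirely up to the caller to check it.
    ParseResult parsePort(const std::string& text, int& port)
    {
        if (text.empty() || text.find_first_not_of("0123456789") != std::string::npos)
            return ParseBadFormat;
        port = std::atoi(text.c_str());
        return ParseOk;
    }

    // Exception style: failure escapes to whichever caller is able to deal with it.
    int parsePortOrThrow(const std::string& text)
    {
        int port = 0;
        if (parsePort(text, port) != ParseOk)
            throw std::invalid_argument("not a valid port: " + text);
        return port;
    }

    int main()
    {
        int port = 0;
        if (parsePort("eighty", port) != ParseOk)   // easy to forget this check
            port = 8080;                            // local, explicit recovery

        try
        {
            std::cout << parsePortOrThrow("eighty") << std::endl;
        }
        catch (const std::exception& e)             // centralised handling
        {
            std::cerr << "error: " << e.what() << std::endl;
        }
        return 0;
    }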

From a personal perspective, I have to say that debugging exceptions is not as easy as it could be, and furthermore if you use exceptions you must ensure that declared exceptions are caught somewhere (far too many developers don't, sadly).

The top three issues with exceptions discussed were:

  • Metastable states - the user must ensure transactional semantics, or the system can be left in an undefined state.

  • Local handling is unduly verbose.

  • They are hard to analyse and debug.
Andrei then presented an intriguing alternative approach which combines some of the better characteristics of both. I won't go into it further here, other than to say it involves a template type called Likely<T>.

During the lunch break the Perforce guys were hosting a sponsor session called "Branching and Merging without a Safety Net". We went along out of interest (SourceSafe really is showing its age now, and we're always on the lookout for new tools) and I'm very glad we did. The delegate's pack (a rather useful ACCU 2007 bag containing lots of flyers and a rather fat A4 pad) also includes a CD with a fully functional 2-user copy of Perforce, so I dare say we'll try it out when we get the chance.

After lunch the world and his dog squeezed into a far too small room for what turned out to be a highly entertaining session by Kevlin Henney entitled "Pattern Connections" (incidentally, the title of this post is a quote from the session if you didn't realise it). I can't even begin to do it justice here. Suffice it to say that if you get a chance to see him present - just go! Kevlin is apparently a brilliantly entertaining speaker at the best of times, and today he was definitely "in form".

After a brief chat with the Perforce guys Beth headed back to the room to rest, while I headed for the session "Test Driven Development with C# and NUnit", which was actually an extract from the Learning Tree 511 course (".NET Best Practices"). We had a very small contingent for this session, but at least that meant we could feel the air conditioning for a change!

Encouragingly, I found that the vast majority of the content of the session covered topics and practices I was already familiar with. As a result, I didn't learn much, but it was a useful confirmation that we are heading in the right direction with TDD. Given that we only started using it back in December, that is extremely encouraging.

During the break we congregated at the Perforce stand for a brief hands on demo of their product; effectively a follow up from their sponsor presentation earlier. Suffice it to say that we were very impressed with what we saw, and we are quite likely to "jump ship" from VSS in the reasonably near future as a result.

A geek gimmick, courtesy of Perforce


After the break was "Grumpy Old Programmers - The Ultimate IT Chat" or (more accurately) "We Want Beer!". This was a freeform, irreverent and very funny discussion, frequently interrupted by calls of "More beer!" from the panel.

Posted by Anna at 09:20 | Get Link

 

What do you mean, it's morning?
Friday, April 13, 2007

More Beer?


The theme of "more beer" at last night's final session was a predictive one. Afterwards everybody gradually congregated in the hotel bar to socialise. As tends to happen, a consensus on where to go next gradually emerged, as a result of which by 8pm a bunch of us were piling into taxis for the trip into town.

The initial plan was to eat at the Randolph, but as ever things changed at the last minute and we found ourselves in the Eagle and Child waiting for the remainder of our contingent, where good food, much beer (and the occasional red wine) flowed amid an equal volume of hilarity. I was wearing my "Life is Simple" CAMRA T-shirt so I was definitely dressed for the occasion!

Suffice it to say that this morning we were a bit slow.

The first session this morning was "This Software Stuff" with Pete Goodliffe (the author of "Code Craft") - a lighthearted but incisive look at what developers who care about their trade should be doing.

Appropriately, this session was anything but serious, which was I think exactly what we needed after the fun of last night. If you get a chance to hear Pete speak, I'd highly recommend going. Just remember to ask him about the fizzy milk and alphabetti custard...

After the interval Beth disappeared into the dark, dark lands of a template metaprogramming session, and I headed for the Cherwell Suite for "Global - Yet Agile - Software Development" with Jutta Eckstein.

The session discussed techniques for running Agile processes in large and often geographically dispersed software teams. Obviously, these have major implications for communication - a keystone of agile methodologies.

One key point that emerged was that the natural tendency to structure a large team around functional blocks is one which runs counter to the agile aim of keeping the system working at all times and delivering each feature complete throughout the system at the end of each iteration or sprint. The project is of course far more likely to succeed if a team or subteam is given complete responsibility for a feature all of the way from requirement to acceptance.

The session discussed various considerations and techniques for overcoming the many hurdles which a distributed agile team faces. Communication and synchronisation are key; how you do it (IM, phone, videoconferencing etc.) isn't particularly important - but it must happen - and ultimately there is no substitute for face-to-face contact. It was an interesting session, and although I've not worked in such an environment I think I can visualise the issues clearly.

Over lunch there was a Visual C++ session with Steve Teixeira, the Group Programme Manager for Visual C++.

The session started with an informal poll, which illustrated how many people in the room are using Visual C++, how few are using managed code, and the reasons (versioning, distribution etc.) why they are using native code rather than managed.

Steve stated that the first priority of the Visual C++ team was native code, followed by enabling interop to allow use of newer technologies from Visual C++. This is illustrated by the roadmap for Visual C++ Orcas and beyond:

  • Renewed investment in native libraries (MFC/ATL)

  • Making it easier to interoperate between platform paradigms

  • Innovation in areas such as concurrency etc.
Interestingly, Steve admitted MFC and ATL have been neglected by Microsoft since managed code emerged. That is now recognised within Microsoft as a mistake and is changing. As a result we can expect to see significant new functionality in the native frameworks (e.g. WPF support in MFC) in future Visual C++ releases.

We can also expect significant IDE improvements in future. C++ IDE support currently lags behind that for managed code, and (interestingly enough) the Visual C++ team see managed IDEs as their "productivity competitors" rather than the functionality offered by add-ins such as Visual Assist.

The new Visual C++ features in Orcas include:

  • MFC support for new Vista common controls (SysLink, the IPv6-compatible network address control, split/drop buttons and command links)

  • Vista UAC support in IDE and projects. Interestingly, the registration of ATL components is now by default in HKEY_CURRENT_USER rather than HKEY_LOCAL_MACHINE

  • New Vista SDK and APIs

  • STL/CLR - an STL which can be used from managed code, and allows STL interfaces to be used to work with managed collections

  • A marshalling library to simplify marshalling data between native and managed types

  • Metadata based incremental managed builds and concurrent module compilation (improved dependency checking)

  • .NET framework multi-targeting (i.e. Orcas can target both .NET 2.0 and 3.5)

  • The C++ class designer is back! Unfortunately, in Orcas it is read only - you can't edit class diagrams for C++ projects in this version, only view them.

  • ATL Server is now shared source on CodePlex (it is no longer Microsoft proprietary)

  • The removal of Win9x targeting. Orcas-built projects can target Win2k and above only.
Sadly, Orcas will not include MFC support for the TaskDialog() API. To me, this is very disappointing given that we really should not be using MessageBox() in new projects (WTL 8 already has it, incidentally).
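
For comparison, this is roughly what the raw API call looks like (an illustrative sketch only - TaskDialog() needs a comctl32 v6 manifest, and the strings here are placeholders rather than anything from Visual Lint):

    #include <windows.h>
    #include <commctrl.h>
    #pragma comment(lib, "comctl32.lib")

    // Ask a yes/no question using the Vista task dialog instead of MessageBox().
    bool ConfirmOverwrite(HWND owner)
    {
        int button = IDNO;
        TaskDialog(owner, 0,
                   L"Visual Lint",                           // window title
                   L"Analysis results already exist.",       // main instruction
                   L"Do you want to overwrite them?",        // supporting text
                   TDCBF_YES_BUTTON | TDCBF_NO_BUTTON,
                   TD_WARNING_ICON,
                   &button);
        return button == IDYES;
    }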

The session finished at almost exactly 2pm so I rushed upstairs to Peter Hammond's 45 minute "Open Architecture vs Open Source in defence systems" session, which I thought might be interesting given my past experience with Racal.

The core theme of the session was that in most cases the reality is that there is very little open source software out there which has sufficient support and active development to be used in a defence environment. Projects such as MySQL really are the exception rather than the rule. As ever, there is no magic bullet.

In the evening those of us who weren't booked in for the Speakers Dinner (a tad expensive at £50 per head, we thought) wandered off in different directions to eat/drink/be merry. Beth and I walked into Wolverton with a couple of the guys for a gorgeous meal at the Trout. Yumm.

Posted by Anna at 08:02 | Get Link

 

Tar'ed and Feathered
Thursday, April 12, 2007

This morning we decided to try out the hotel pool (and spa - they have a steam room and sauna too) before breakfast. It was an excellent way to start the day, but (I suspect) one we won't repeat as often as we probably should this week.

As a result we had a late breakfast, finishing with just 10 minutes to spare before the start of the first session - the keynote "The Software Development Pendulum" with Mary Poppendieck. The conference hall was packed, and we managed to find a couple of seats near the front.

Mary presented an interesting history of software development - and in particular some of the early failures. A recurring theme was that the vision for larger systems has consistently been at or beyond the limits of the hardware and software capability of the time.

One thing which came out very strongly was the contrast between large systems, smaller systems and software products - the latter are not prone to failure in the same way, and indeed my own experience bears this out - none of the projects I've worked on have failed, and all have been delivered.

At the end of the session Mary touched on Lean Software Development, which is something I feel we should explore further.

During the break (held in the sponsors lounge this time) we bumped into Andy Brice from the Business of Software forum. We last met him at the ESWC last year.

Next up was "Coaching Software Development Teams" with Michael Feathers. This was a session I've been looking forward to. Michael is the author of "Working Effectively with Legacy Code", a book which I've found phenomenally useful in bringing Visual Lint under unit test and using TDD techniques to develop the product further.

His topic for this talk was how to coach teams to use agile and quality-centric techniques to improve their processes, and thus the quality and fitness of their products.

Inevitably, a central topic of this talk was human behaviour, and how it can affect team dynamics. The values and motivations of team members are something a coach must recognise in order to achieve their aims. Changing the values of an organisation in a significant way is extremely difficult (and therefore likely to fail), but building on the existing values of a team and/or its organisation can be very successful.

It was very noticeable to Beth and me that the approach presented was analytical rather than human-centric. When a delegate asked how to formally evaluate the values of the team I couldn't help thinking that something fundamental was being missed. To understand a team, you have to know them - and to do that you need to meet them both in and out of their everyday environment - in other words, get to know them as people as well as professionally.

I couldn't help thinking that many of the techniques being presented were ones I've been using for some time, but it was useful to see them presented in a structured setting nevertheless.

After a relaxed lunch and chat out on the grass we headed back in for the first of the afternoon sessions - "Reviewing the C++ Toolbox: Identifying tools that support Agile development in C++" with Alan Griffiths.

During the introduction he identified the wide variation in the tools used for C++ development, with no consensus between groups (or even teams!) as to which tools are best suited to each role.

This was an interactive session - the idea being that delegates would share their experiences of tools in particular domains and suggest alternatives:

  • Build Systems/Source control/Continuous integration

  • Test frameworks

  • Refactoring

  • Code Documentation

  • Modelling + round trip

  • Editors/IDEs

  • Code Analysis

  • Debugging

  • Instrumentation/Coverage/Profiling/Performance Analysis

We broke into 6 groups to discuss various areas from the list above. I must have stuck my hand up once too often, because I ended up leading (with another developer who worked on QA C++ until recently) the group discussing code analysis. It's the first time I've presented in front of a group of peers in quite a while, and to be honest I was not too sure how well I came across at the time - I felt quite nervous, but it seemed to go down well (at least, Beth tells me I did!).

Next up was "Linting Software Architectures" with Bernhard Merkle. We've been talking with Bernhard by email over the last couple of weeks, so this is a session we've been looking forward to. The focus of this talk was architectural analysis, and in particular tools which can be used to automate such analysis.

The need for such tools is of course a result of the phenomenon of "architectural decay" - a problem which should be familiar to any developer who has had to maintain working software systems.

Architectural Analysis works on layers, graphs, subsystems, components, interfaces etc., and assesses metrics such as coupling, dependency etc. as well as things like consistency analysis and detection of anti-patterns.

Consistency Analysis is based on the premise of comparing the codebase with a "gold standard" model. Key to this sort of analysis is of course how results are presented (to be honest, the same considerations apply in code analysis).

A further analysis type described was Rating of Architecture, which assesses characteristics such as cycles, coupling, stability and the presence (or hopefully absence) of anti-patterns.

Example tools presented include:

www.software-tomography.de
www.axivion.com
http://www.hello2morrow.de
http://www.headwaysoftware.com
http://www.lattix.com
http://www.xradar.org

Of these, Sotograph (C++/C#/Java) and Structure 101 (Java/C++/Ada) look particularly interesting.

Bernhard recommended the book "Refactoring in Large Software Projects". We'll certainly take a look at it.

Posted by Anna at 07:21 | Get Link

 

Getting Agile
Wednesday, April 11, 2007

Yesterday was the pre-conference workshop day. We'd pre-booked for the agile development workshop with Kevlin Henney, and it turned out to be a good choice. After the (very entertaining) initial presentation we organised into teams of four for the workshop proper.

The first hurdle we ran into was that one of the two laptops in our team was a Linux box, and could not read the USB key we were using to transfer code between the two teams. We lost a fair bit of time to that, eventually resolving it by the loan of a conference laptop.

When we finally got started on the first sprint (iteration) we quickly broke the team into two pairs, each with responsibility for a different part of the project. Everything seemed to be going well (unit tests were being written and used to prove each piece of new functionality) until we realised we had misinterpreted part of the requirements and as a result the interface wasn't going to work. A rapid re-architecting of the core followed, which moved additional functionality into the remit of the other sub-team at the end of sprint 1.

Sprint 2 seemed to be going well, but towards the end we realised we had a bigger problem than we thought. Re-reading the requirements led to a realisation that we'd misinterpreted them, and needed to do something drastic.

Sprint 3 (the final sprint, at the end of which we had to report) was mainly about getting back on track. The good thing was the way the team pulled together - at this point we ditched the pair programming and worked on one laptop as a group of 4.

With this additional focus we were able to start hitting our requirements, and although we weren't as far along as we wanted to be by the end, we were able to report reasonable metrics (number of requirements met, number in progress and number of passing unit tests) at the end of the session. It was a useful and interesting experience.

Afterwards we chilled for a while before heading up to the bar to eat. While a large contingent headed off to a local restaurant we stayed behind to socialise and get to know people. Beth went back to the room to rest for a while, while I stayed around to chat.

Before I crashed I did however fix a bug and get the corresponding unit test passing. I always was a night person.

Posted by Anna at 07:35 | Get Link

 

Sangria, Disaronno and Dismembered Laptops
Tuesday, April 10, 2007

Another laptop bites the dust...


I'm writing this from the breakfast bar at the Paramount Oxford hotel. With the conference starting in less than an hour we're killing time.

The drive here from Bournemouth was easy, although we lost the sun somewhere along the way. The hotel was easy to find - just off the A44 which comes into the city from the Northwest.

We did have a bit of drama yesterday, though. With impeccable timing, Beth's laptop (a compact but battered three-year-old Vaio) decided to die just before we set off. Talk about annoying!

After we'd unpacked (the room now looks well and truly "lived in"), we went exploring around the hotel and quickly found the gym and spa. A quick change of clothes and an induction later and we were off and running (or in my case, cycling). A 45 minute workout was just what we needed after the drive from the south coast.

The menu for the hotel restaurant looked a little heavy for our tastes, so after a quick shower we decided to head into town to eat. We thought it was around a mile, so about 15 minutes' walk. We were, however, wrong. After 25 minutes we stopped to check our map and discovered it was more like 2 1/2 miles!

We eventually got there of course, and after exploring for a little while we settled on a small tapas bar called La Plaza on Little Clarendon St. The food was great, the white wine a perfect accompaniment and by the time the sangria arrived we were well into the flow. The Disaronno we finished the night with was a perfect way to end the day and guaranteed that we'd eat a good breakfast the following morning.

Of course, we caught a cab back....

Posted by Anna at 08:19 | Get Link

 

On our way to the ACCU Conference
Sunday, April 08, 2007

We're just about to head off to Oxford for the ACCU Conference.

As I'm not entirely sure what internet connectivity we're going to be able to find when we get there, please bear with us if we're a little slower responding to emails than usual. If all else fails, I guess there's the £17 per day option for wi-fi in the hotel, but we'll obviously try to find something more cost-effective if we can!

Posted by Anna at 21:54 | Get Link