Sunday 2 November 2008

RestFixture latest additions: JSON support and Sequence Diagrams Generation

RestFixture has been enhanced with two new features:

  • basic support for JSON;

  • runtime generation of sequence diagrams showing the flow of HTTP calls within a table.


First of all, thanks to AndyO, who is the author of the JSON support enhancements. RestFixture now allows testers to write and verify expectations on content returned as JSON.

As for the second feature, the generation of sequence diagrams from a RestFixture table, the idea has been growing since I found UMLGraph, a nice tool that generates class and sequence diagrams. The idea is that, when describing complex REST API interactions, a picture is worth a thousand words.

Below are two simple diagrams and the associated tables to demonstrate the generation of the diagrams.
Note that the diagrams are generated at execution time, so they reflect the actual outcome of a RestFixture test run. This means that if the test fails, the picture shows the flow of messages that made it fail.


The diagrams are pretty basic: they only show the objects (or resources) involved, the verb, the return code and, for POST, the id taken from the Location header.

Full instructions on how to install the required software and how to use the new feature are available on the RestFixture web site.

Support for generating the diagrams is basic. It has been extracted out of the main RestFixture trunk as it's still in spike mode. The code, though, is checked into the same SVN trunk (the project is called PicBuilder).

As always comments are very welcome!!

Sunday 26 October 2008

Getting immediate and accurate feedback from the IDE

In Agile methodologies, getting feedback to adjust future work is a valuable practice to follow. In reality this is nothing new: in any human activity, information gathered as feedback from the system being developed is used to drive the activities that improve the chances of getting the desired output.

Agile methodologies typically address feedback from the customer perspective: the team produces the agreed artifacts, the customer provides feedback on the outcome, and together they re-plan future activities to adjust appropriately and get maximum business value from newly delivered artifacts.

Feedback is characterized by two properties: its “immediacy” and its “accuracy”. Immediacy is the perceived time elapsed between when feedback gathering starts and when it's actually obtained. Accuracy is related to the usefulness of the information extracted from feedback.

Bearing in mind these two properties, feedback can be extended to development activities by appropriately configuring development tools and refining development practices. The aim is to get feedback "now" (immediate) and "spot on" (accurate).

The ultimate source of feedback for a developer is the build process. Typically, though, "the build" is a resource-intensive process that happens on a continuous build/integration server (for example CruiseControl). This process consists mainly of a sequence of steps: checkout from the source control repository, compilation of the code, execution of the unit tests, metrics gathering, deployment to a test environment, execution of the customer acceptance tests and documentation generation.
These steps are executed against the entire source tree and provide the ultimate feedback on whether the latest code changes have broken any of the existing functionality. Clearly, the larger the code base, the longer the build takes. In a short "write test - write code - check in" loop (typical of a TDD approach), though, it's usually unacceptable to wait more than a few seconds to find out whether the code just added breaks the build. So even for short builds (in the range of a few minutes) it's advisable to use the IDE to improve immediacy (by shortening the feedback loop), albeit without diminishing accuracy.
One way to do that is to execute part of the whole build process locally, possibly in the IDE itself.
Taking the Eclipse IDE as an example, the following is a list of tips that may be used to gather immediate and accurate feedback.
  • Use the JUnit plug-in to run the test just written until the code that passes it is complete (no news here, I am sure most of us do this already). The caveat in this tip is to make sure that the result is accurate: the way the plug-in runs the tests must be equivalent to the way the CI server executes them, that is - typically - via an Ant script. Whilst this is immediate for simple tests and code structures, it gets very hairy when frameworks are used (for example the Spring Framework) which require configuration files on the classpath and/or external library dependencies. So the first thing to check is that the classpath Ant uses to run JUnit is the same as the one Eclipse uses to run the test. One caveat here is that the Eclipse IDE is limited to a single classpath per project, so it's less flexible than Ant, where the order and content of a classpath can be more fine-grained. So, typically, any "target" in an Ant build script should refer to a "project classpath" path definition that lists the same set of directories and files as the Eclipse project classpath, in the same order (a sketch of such a shared path definition follows this list).

  • Use code metrics plug-ins to run metrics on the fly on single files and incrementally on packages and on the entire source tree. The rationale is that in a short TDD loop the number of files touched is minimal, so running the metrics on that subset first, and eventually on larger sets, improves the chances of finding problems sooner. The Eclipse IDE can be enriched with all sorts of plug-ins, including those for Checkstyle, FindBugs and JDepend. To gather accurate feedback it's necessary that the configuration files for each of these tools/plug-ins are shared between the Eclipse IDE and the main build - it's usually good practice to check the configuration files into the same code tree so that they can easily be referred to from both the IDE and Ant.

  • Use the Ant plug-in to execute single targets, or sets of targets, in the IDE. Accuracy is guaranteed by keeping the Ant environments in the CI server and in the IDE equivalent - this includes redefining the Ant home in the IDE so that the same version of Ant runs in all environments. Running Ant in the IDE rather than in a shell offers the limited advantage of the IDE's support for parsing the build output (context-sensitive help and hyperlinks to the source code).
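
To illustrate the classpath tip above, here is a minimal sketch of a shared path definition in an Ant build file. It is not taken from a real project: the directory and target names (build/classes, build/test-classes, lib, compile) are hypothetical, and the same entries, in the same order, would be configured in the Eclipse project classpath.

<path id="project.classpath">
    <!-- compiled production and test classes first, then the third party jars -->
    <pathelement location="build/classes"/>
    <pathelement location="build/test-classes"/>
    <fileset dir="lib">
        <include name="*.jar"/>
    </fileset>
</path>

<target name="test" depends="compile">
    <!-- every target that needs a classpath refers to the same path definition -->
    <junit haltonfailure="true">
        <classpath refid="project.classpath"/>
        <formatter type="plain"/>
        <batchtest todir="build/reports">
            <fileset dir="test" includes="**/*Test.java"/>
        </batchtest>
    </junit>
</target>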


There are plenty of other clever usages of IDE features that can drastically improve feedback gathering; some are specific to Eclipse, others are exportable to any other IDE, like the ones described above. In all cases, though, the key message is that immediacy must not be achieved at the expense of accuracy (as my friend Paul would say, "feedback is only valuable when it's accurate").

Thursday 11 September 2008

Cultural challenges

Train Station, Coffee shop (I won't mention the name), 8am of a typical commuting day

Barista: Yes, please!
Me: Can I have a single espresso, please?
Barista: Sure!
(The barista goes to the coffee machine, but suddenly comes back)
Barista: Sir, are you sure you want a single espresso? The double espresso is the same price.
Me: Yes, I am sure, thanks.
Barista: (Baffled) Uh... Ok!

(Although the most upsetting thing, after this conversation, is when the cup is 3/4 full)

Wednesday 27 August 2008

Agile 2008 - Report

As promised a few days ago, this is my report on the sessions I attended at Agile 2008.

Before commenting on the sessions I attended, I would like to mention that the whole conference did not blow me away. I mean, it was very good: the hotel, the location, the people, our active participation with two talks and all of that. But every day I had the feeling that all of it was too much. I am not used to group gatherings of this size, so that may explain it. Anyway, there were great sessions going on, some of which I could not attend because of work commitments. And this time around I decided to attend different types of sessions, outside my comfort zone.
I learned a lot from attending the sessions described below and - not surprisingly - I came back home with more questions than answers. Happily, I am still in touch with some people I met at the conference, following up ideas initially discussed there. So, overall, thumbs up to the whole experience. And Toronto is a great city.

Keynotes

"The Wisdom of Crowd" by James Surowiecki

Without a doubt this was an interesting opening session. It was clearly aimed at supporting the speaker's book The Wisdom of Crowds, but it was nonetheless new to me. I got interested in collective intelligence when I read Programming Collective Intelligence, aimed at a more technical audience. This keynote gave me the chance to learn about another aspect of the same topic: how to extract information from a group of people when they are identified as an entity in its own right. The crowd, that is.

The common way to extract information from the crowd is by aggregating the information coming from the single entities within it. One crucial aspect, though - very well discussed in this speech - is how to make sure that the crowd is built to maximise the chances that the aggregation of ideas can generate new information: this is a case of "the whole is bigger than the sum of its parts".

So, the key point is that under the right conditions a group of people can be effectively intelligent, contrary to the commonly perceived thinking that crowds are volatile and stupid. Obviously the catch is to understand what the right conditions are.

The right conditions are "diversity" and "independence".

Diversity has to be understood as cognitive diversity, not sociological: different approaches to problem solving, different levels of skills and technical capabilities, as opposed to race, sex or social class. The speaker referred to different anecdotes (taken from the book) that prove his hypothesis, inferring also that diversity is even more important in small groups. In essence, homogeneous groups tend to hide flaws as ideas are always analysed in the same way (that is, solutions are found by agreement rather than by conflict). In these cases an approach could be to have someone playing "Devil's advocate", with the aim of looking at ideas and outcomes from a non-standard perspective.

Independence is as important as diversity. Looking at the crowd as the entity providing the most accurate and effective solution to a given problem does not mean that individuality is not important. Individuals must be independent and able to express their best in any given situation. If this doesn't happen the aggregated results are biased and possibly wrong. This may happen because by nature individuals tend to imitate each other, or because organizations punish diversity.

So overall an enlightening speech that I can relate to in my day-to-day job.

“Quintessence” by Robert "Uncle Bob" Martin

This session happened just after Thursday's gala dinner. I must say that Bob Martin is a character. As for the content of his session, the main points were related to how Agile is implemented in real life and how the developer community should behave professionally and push back against those who want them to deliver crap code on time.

About agility, the focus was on XP and Scrum and how - especially at the beginning - the two schools of thought differed: XP more strict and guided, Scrum more relaxed about actual software development practices. He did an interesting experiment during the session: he asked the attendees to raise their hands, then started to enunciate each of the XP practices, asking people to put their hands down if they weren't following that specific practice.

I reckon we were over 1000 people (if not more) and at the end there must have been 10 people with their hands up. I think the point of the experiment was to show that we are at a stage where people blend the practices and principles of Scrum and XP into what works for them.

The other crucial point made was about developers having to put courage on the line and never compromising on good code for new features to accommodate insane deadlines or scarce resources. "We value good code over crap" was suggested as another (unlikely) entry in the Agile Manifesto.

So, a very interesting pitch, full of truth and controversy. I cannot disagree with most of the talk. I still have to find, though, a place where developers have such discipline that they can draw a line between over-engineering, quality, speed and alignment with business goals.

"The Wisdom of Experience" by Alan Cooper

The closing keynote was nothing new to me, having read The Inmates Are Running the Asylum. Same message as Uncle Bob's: quality over quantity. The speech was full of anecdotes of teams and companies shipping bad and unsuccessful software "on time and on budget". The key is to "take your time and produce good and complete software" that people can enjoy using. The way to nirvana seems to complement agile principles: he suggests there is typically a lack of good software design (especially of interfaces) and proposes spending and investing in good and correct design even before a single line of code is written.

The whole second part of the talk was dedicated to advertising "interaction design", the new speciality he is proposing, and how it can bridge the gap between technical skills and business skills. An interaction designer knows what people want and how to make software usable and easy to understand. Interfaces should not be designed by developers; users, on the other hand, should not be allowed to give stories directly to the developers. It is the job of this new type of designer to filter and adjust, so that the two groups come together for a better solution.

He's got a point: we all know of crappy software with poor user interfaces or incomprehensible behaviour. Whether fixing this requires a full-time job and a specific career path - the way he envisages - remains to be seen.

Sessions

Agile contracting - Rachel Weston, Chris Spagnuolo

I attended this session hoping to learn more about contracts for collaboration between a client and a contractor adopting agile methodologies.

I got a set of problem statements with some strategies on how to mitigate the risks, and a bunch of statistics trying to show how agile is better than waterfall.
By the way, I have an issue when agile is sold using numbers like 93% increased productivity or 83% improved satisfaction versus a traditional waterfall approach that gives 35% successful projects. Firstly, it must be made clear - even more than the numbers themselves - under what conditions these numbers were collected. Secondly, even taking the statistics at face value, saying that only X% of projects adopting waterfall are successful doesn't mean that the next project has an X% probability of being successful: the independent events that may occur and the people involved are so different from project to project that the assumption is mathematically wrong. Hence using it as a selling point is... incorrect.

Anyway, the most important takeaway from this session was an overall understanding of the problems that contractors supporting agile face. Mainly:
  • having to compete with "non agile" competitors, who may offer a more sound sense of security in delivering quality software on time;
  • having to deal with customers who do not embrace agile, who are not fully supportive or who don't have the means to keep up with the fast agile pace;
  • having to provide predictability on scope and schedule at bid time;
  • having to match customer expectations and the internal finance department's needs for invoicing, payments and costs.
The strategies suggested, coming directly from the speakers' experience, were mainly focussed on:
  • keeping the contract simple;
  • engaging and educating the customer in the agile practices and explaining the technicalities;
  • clearly stating the responsibilities of both parties (customer and contractor);
  • working out from past velocity charts what can be done, what might be done and what won't be done;
  • agreeing with the customer on sharing the risks: loosely define the scope when schedule and resources are fixed, instead of the classic time and materials;
  • making sales people responsible for the performance of the contract, so they're focussed on knowing what they're selling.
Overall a good session, especially for beginners, with a few pointers to think about. Surely the devil is in the details, but the same message applies here: it's all about people collaborating and communicating effectively to achieve a shared goal.

Crafting User Stories – Four Experts and the audience weigh in

I ended up in this session with lots of hope. When talking about user stories it's always easy to come up with techniques that look good on paper but are not effectively reusable in real life. I must admit I didn't get the answers I was looking for, but the big plus was observing five experts in the subject matter (dis)agreeing on concepts that maybe some time ago were considered heresy.
Each expert on the panel was asked to introduce himself and give an overview of his approach to user stories. The moderator then asked attendees to come up with questions for the panel to debate/answer.
I will report my learning points here.
  • it's important for everybody in the team to understand where a story fits in the picture;
  • the format in which stories are formulated is not important as long as a few key parts are somehow present: what the story is all about, what it costs to have it implemented (whatever the unit of cost is), who's benefiting from the implementation of the story and what benefit the story is going to provide, and how stakeholders can know when it's done and to what extent. Also important, for me, is to know who can accept it;
  • there's no harm in having stories in the backlog specifically tailored to solve technical problems, as long as the points raised above are clear in the story.

Leading agile teams - Mike Griffith

This was a 90-minute workshop for agile leaders aiming to learn leadership techniques. The speaker took the audience on a journey through all the aspects of leadership, especially those revolving around team leadership.
One of the most important points raised was the importance of maintaining vision of what the team is building and why.
The speaker suggests using the "product box" technique: the team comes up with a cardboard box (possibly kept visible in the office) representing the product, with a name, a logo and its main features. The intent is to maintain focus on what the team is building by means of a real, physical object.
Techniques aside, the main learning point was that vision is so important that the job of the leader is to make sure it's consistently updated and propagated to the team regularly.
The difference between managing and leading (the latter being built on the former) was also stressed, and - interestingly - the job of the leader is to build a team that eventually behaves like the "Orpheus orchestra", which plays without a designated "Maestro", or like the Canadian geese, which make self-organisation and teamwork a matter of life and death.
For more information on the subject see http://www.apln.org and http://www.leadinganswers.com

Memorable mention for

Wednesday 20 August 2008

RestFixture is now available

I have made the RestFixture (discussed in this post) available on Google Code here: http://code.google.com/p/rest-fixture/.

You can get it, use it and modify it under LGPL terms.

Feedback is more than welcome, of course.

Sunday 10 August 2008

Agile 2008 - part 1

Oh yes, Agile 2008... here I am, just landed from Toronto. I am just reshaping my notes. I'll be writing (as I have also done in the past) my comments on the sessions I have attended and presented, and about old friends I have talked to (Rick, Rachel, Angela, JB, Steve) and new people I have met. Stay tuned.

Saturday 2 August 2008

Get FitNesse with some Rest

UPDATE: code moved here: http://github.com/smartrics/RestFixture

UPDATE II: Thank you to Steven Haines for the clear and concise how-to guide on the RestFixture.

I am currently involved in building a REST API. Our direct customer proxy (the architecture team) needs to "understand" how we implement the agreed acceptance criteria and has also asked us to document the API.
FitNesse is our tool of choice for writing functional Customer Acceptance Tests; we currently use it to implement this type of test for other parts of the system we're building. So it came naturally to start using it for documenting the API.

At first we implemented the tests using essentially ActionFixtures. The approach adopted consisted of "wrapping" the REST API in more descriptive methods that could be pressed, checked and entered.

After the first dozen tests it was apparent that this approach was not ideal. Tests were not clear enough, and maintaining the fixtures was hard and time consuming as the amount of code duplication was high by any standard. New tests required us to write more code and, in fact, we were testing just the behaviour of the backend without giving enough exposure to the API itself - consequently, our tests were not descriptive enough to work as live documentation.

So I decided to write a new "type" of fixture, inspired by the ActionFixture, the RestFixture.

The core principles underpinning the decision to write a new fixture were the following:
  • For documenting a REST API you need to show what the API looks like. For REST this means:

    • show what the resource URI looks like. For example
      /resource-a/123/resource-b/234
    • show what HTTP operation is being executed on that resource, specifically which of the main HTTP verbs is under test (GET, POST, PUT, DELETE, HEAD, OPTIONS).

    • have the ability to set headers and body in the request

    • check expectations on the return code of the call in order to document the behaviour of the API

    • check expectations on the HTTP headers and body in the response - again, to document the behaviour

  • I didn't want to maintain fixture code. If I could only write the tests...
  • I wanted to be able to let the customer proxies write the tests... and they understand Wiki syntax better than Java.

The RestFixture at a glance

The RestFixture is an ActionFixture, therefore all the ActionFixture goodies are available. On top of that it contains the following seven methods:

  • header: to set the headers for the next request (a CRLF-separated list of name:value pairs)

  • body: to allow request body input, essential for PUT and POST

  • let: to allow data from the response headers and body to be extracted and assigned to a label that can then be passed around. For example our API specifies that when you create a resource using POST, the newly created resource URI is in the Location header. This URI is necessary in order to perform further operations on that resource (for example, DELETE it in a teardown).

  • GET, POST, PUT, DELETE, to execute requests.

Each test is a row in a RestFixture table and has the following format (an illustrative sketch follows the list of cells below):
|VERB|uri|?ret|?headers|?body|

  • VERB is one of GET, POST, PUT, DELETE (at the moment we're not supporting HEAD and OPTIONS)

  • uri is the resource URI

  • ?ret is the expected return code of the request. It can be expressed as a regular expression, that is, you can write 2\d\d if you expect any code between 200 and 299.

  • ?headers is the expected list of headers. In fact, the expectation is checked by verifying that *all* the headers listed here are present in the response. Each header must go on a new line, and both name and value can be expressed as regular expressions that need to match in order to verify inclusion.

  • ?body is the expected body of the response. This is expressed as a list of XPath expressions to allow greater flexibility.
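
For illustration only, here is a minimal sketch of what a few rows might look like; the /resource-a URIs, return codes and expectations are hypothetical, and the table set-up row (fixture class name and base URL) is omitted:

|POST|/resource-a|201|Location : /resource-a/\d+| |
|GET|/resource-a/123|200| |/contact/firstname[text()='John'] |
|DELETE|/resource-a/123|2\d\d| | |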


Some examples

The following pictures are snapshots taken from FitNesse with the aim of providing examples of usage of the RestFixture. The notes before each test explain the details of the fixture itself.








It's worth noticing that, generally, expectations are not matched by string-comparing the expected value with the actual value. Expectations on headers are verified by checking that the expected set of headers is included in the actual set of headers; similarly for the body, where expectations are matched by checking the existence of nodes for the given paths.

Conclusions

The RestFixture allows FitNesse tests to be written without having to write any Java code to back up the fixtures. Tests are clear and easy to read and write, which obviously improves their readability. Therefore it works well for documenting the API and the behaviour of the system under test. Looked at from another angle, the fixture essentially implements a REST DSL that allows customers to write tests on their own.

Sunday 6 July 2008

Groovy for XML transformation

I have been playing with Groovy recently, specifically with the MarkupBuilder, Groovy's native support for markup languages. It basically means writing XML using native Groovy syntax. Pretty neat.

In my job, more often than not, I have to write software to integrate two or more systems. This often means writing code that reads an XML stream into a Java object for manipulation and eventually writes the result into another XML stream.

This typically involves creating binding objects for the input XML and the output XML, and then a sequence of calls to getters on the input object and setters on the output object to implement the mapping.

This task is very error prone and tedious. A good solution is to use the MarkupBuilder (and, yes, I know about XSLT, but that is not the point here!).

So, suppose that you have the following POJO

public class Contact {
    private String id;
    private String name;
    private String surname;
    // ... getters and setters
}
and you want to serialize the instance

Contact c = new Contact();
c.setId("123");
c.setName("John");
c.setSurname("Bloggs");
into the following XML

<contact>
    <key>123</key>
    <firstname>John</firstname>
    <secondname>Bloggs</secondname>
</contact>


Note the difference between the POJO attribute names and the XML tag names.

Using Groovy and its MarkupBuilder, the first thing to do is to create a Groovy converter, a class with a method that reads the data from the bean and produces the XML. Create a file src/groovy/ContactConverter.groovy with the following code:
import groovy.xml.MarkupBuilder

class ContactConverter {
    def convert(bean) {
        def writer = new StringWriter()
        // the builder must write to the StringWriter, not to System.out
        def xml = new MarkupBuilder(writer)
        xml.contact {
            key(bean.id)
            firstname(bean.name)
            secondname(bean.surname)
        }
        writer.toString()
    }
}

If you execute the code
println new ContactConverter().convert(c)

in a Groovy shell, you get the XML shown above.

The next step is to make this code executable from your Java service.
You can use this class (an adaptation of the code available in the Groovy documentation):
import java.io.File;
import java.io.IOException;

import org.codehaus.groovy.control.CompilationFailedException;

import groovy.lang.GroovyClassLoader;
import groovy.lang.GroovyObject;

public class GroovyConverterInvoker {
    public String invoke(String fileName, Object bean) {
        GroovyObject groovyObject = createObjectFromStreamName(fileName);
        String result = (String) groovyObject.invokeMethod("convert", new Object[] { bean });
        return result;
    }

    public GroovyObject createObjectFromStreamName(String fileName) {
        ClassLoader parent = getClass().getClassLoader();
        GroovyClassLoader loader = new GroovyClassLoader(parent);
        Class groovyClass;
        File f = new File(fileName);
        try {
            groovyClass = loader.parseClass(f);
            return (GroovyObject) groovyClass.newInstance();
        } catch (CompilationFailedException e) {
            throw new IllegalStateException("Unable to compile " + fileName, e);
        } catch (IOException e) {
            throw new IllegalStateException("Unable to open file " + fileName, e);
        } catch (InstantiationException e) {
            throw new IllegalStateException("Unable to instantiate object for class in " + fileName, e);
        } catch (IllegalAccessException e) {
            throw new IllegalStateException("Unable to access object of class in " + fileName, e);
        }
    }
}

You can then invoke the conversion:
public String invokeContactConverter(Object bean) {
    String fName = "src/groovy/ContactConverter.groovy";
    return new GroovyConverterInvoker().invoke(fName, bean);
}

You can now use JAXB/XStream or XmlBeans to parse the result of invokeContactConverter and initialize a new POJO for further use, or send the XML on the wire.

The solution can be generalised and optimised to a point where you only need to write groovy converters and load/modify them dynamically when necessary.

Saturday 5 July 2008

Sicilian as a language

I am randomly browsing the web today. I have just finished reading a Wikipedia entry on the Sicilian language. I always thought that Sicilian was an Italian dialect, although I was aware of several influences from French, Spanish, Latin and Greek, but I realised that scholars do consider it a language in its own right. That makes Sicilians bilingual, at least.

Tuesday 1 July 2008

UMLGraph for auto-generating class diagrams from your source code

As part of improving the documentation of our software - a REST API - for the benefit of our customer (mainly an architect) and (future) users, we have investigated UMLGraph, a free application that is able to parse source code and (with the help of Graphviz) generate class diagrams that can be embedded in the Javadoc documentation.

The tool is very good and we're now trying to integrate it into our continuous integration build via Ant.

Monday 9 June 2008

BT Web21C SDK wins eWEEK Excellence Award

The BT Web21C SDK has been awarded the eWEEK Excellence Award in the "Application Development" category, as reported here. I am part of the team who built it and I feel proud that two years of hard work have been recognized.

Sunday 8 June 2008

Smartrics tag cloud

Go figure why the guys at Blogspot have not provided a tag cloud widget. So, kudos to phydeaux3 for supplying the code for my tag cloud. Except for changing the font colors and types to make it look better, it was a matter of copy and paste into my blog template. Nice and easy...

Saturday 7 June 2008

The (hyper)reality of social networking

I was reading about hyperreality when I started thinking about how it relates to the social networking phenomenon and the communities built around it. Is there a connection between the two? Maybe. In fact, someone else has already discussed this topic in this rather interesting post.

My favourite definition of hyperreality is Umberto Eco's: "the authentic fake". As you would expect, the web is full of information about hyperreality; for example, this is a fairly complete article with examples.

I'll try to write down my thoughts on how social networking community members live in a hyperreal world. I should warn you though: I may sound cynical and destructive. Far from it: I think there's a place for social networking web sites and services (after all, I am a blogger too) and there's nothing wrong with being a consumer of these services. As far as I am concerned, right now I am just trying to understand the mechanics, why they work, why they exist the way they do now, and how they will transform in the future. So bear with me on this aspect.

At the core of the meaning of hyperreality is the ambiguity between "real" and "fake" reality that arises when facts, events or objects are filtered by a multitude of media.

Members of communities built around any of the latest and cutest Web 2.0 websites interact by the means and rules offered by the portal they use (Facebook, MySpace and the like). This medium filters the behaviour adopted by people. Actually, members of any community, in a way, always interact within an accepted framework: think of the rules, implicit (values, culture) or explicit (laws), that manage our relationships in a social environment and how they drive our behaviour.
The main difference, though, is that people meeting face to face have other means (environment, body language, time...) that contribute to maximising the possibility that facts and events are perceived and understood correctly and close to reality.

Now back to social networking. As said, the web shapes the behaviour; there's also in the mix the fact that content and messages are exchanged using a different time pattern compared to normal face-to-face relationships. A message, a profile, a blog post, a tweet is sent to the server and then delivered to the intended recipient(s). The recipients read it, digest it, formulate a response and eventually send it back. This, to me, contributes to the creation of a world and of a reality that is transformed: what the recipient perceives is the authentically "fake" reality that the sender wants to transmit, having lost its immediacy and been tailored to the medium chosen to operate upon. Besides, both sender and recipient know that the content being generated is public.
I like to think that communities on Facebook (I am not picking on Facebook, it's just an example) are equivalent to the community of housemates in the Big Brother house (or, for argument's sake, to any other reality show). The medium, the place and being observed change the behaviour, allowing the community (participants and viewers) to create a fake real world.

So, it seems that social networking is a child of its time: it has been made possible by new technologies - higher network speeds, new web site capabilities, etc. - and it's an expression of our current time, where appearance rules.

I have a memory from my childhood: if you're Italian or you watch Italian TV you know about the Mulino Bianco commercials (this is one of many), which epitomise the (hyper)real world. We (the community of consumers) are led to believe that that is a beautiful real place, with a real family, where - obviously - plumcakes and brioches are soft and tasty.

Is there any difference between the Mulino Bianco ad and any of the profiles on Facebook? Leaving aside the style and the craftsmanship - Facebook users may not be professional advertisers - the commonality is that both are hyperworlds whose authors want the recipients to walk in and live the life they want them to live.

In this context, let me fool around. If hyperreality is made of hyperreal facts and hyperreal events that exist in a hyperreal world, then Twitter allows users to exchange hyperreal facts; MySpace and Facebook are hyperreal multi-worlds; SecondLife is a real hyperreal world.

The astute reader may now say that hyperreality is just a play on words: people make real money with these sites, and businesses and enterprises back them up. It depends; it's a matter of perspective. Vegas and its casinos are extreme hyperreality made true. Punters entering a casino get into a manufactured hyperreal place where everything is obviously fake, to let them believe that their money is fake too. Incidentally, those who think that the money isn't fake are the casino owners, who live in the real reality of the gambling business.

So, who's making real money in this hyperreal world? I guess those who are able to mine the user data by aggregating it and extracting useful market information, for example. I suspect that biases will cancel each other out when aggregated, providing the clever data manager with information to use as he likes (hopefully within the boundaries of the data protection acts).

A real example is available here: it describes how Amazon makes money using information extracted from user-generated data.

Yes, clearly this is a win-win situation. Community members get value by being able to cultivate their social aspirations, voyeurism, need to be heard, trends and fashions, benefiting from other members' experience. Providers - on the other side - are able to target established communities by processing the generated data and aggregating it with the purpose of extracting financially useful information (read the Facebook case study).

I am sure there's more to say. I'll stop now, but I am interested to hear your view on this, dear reader.

Covering the obvious

I was listening to Led Zeppelin whilst doing the restyling job at home and acknowledging how popular some of their songs are. Obviously Stairway to Heaven is amongst those... so I started wondering how many covers have been published in almost 40 years. I started digging through YouTube but, funnily enough, at the same time I found that Ernesto Assante - a famous Italian music critic - had already done it all for me by listing on his blog the most unusual covers of that masterpiece. The one I like amongst those he lists is the version by Robyne Dunne, very 80s and colorful. Here it is:


Saturday 31 May 2008

Agile restyling

Whilst my wife and son are away I have taken a few days off work and started to mess with the floor and the walls in the corridor of my house.
I am a brave man... with the help of a DIY book, some imagination and memories of a decorator who worked in my parents' house 20-odd years ago, I started taking off the horrible green carpet I found when I moved in. Surprise! Under the carpet there was an even more horrible black and red vinyl tile floor stuck onto the floorboards. Three days of hammering off the tiles inch by inch with a chisel didn't do the job: I had to hire a floor sander and use seven P24 sandpaper disks - amongst the coarsest - to get rid of the mixture of plastic and glue stuck to the 6 square metre area I am working on.
I took off the skirting boards and a portion of the wall plaster and did all the necessary fixes to modernise the look and feel. I started four days ago and am now at the stage of painting all the surfaces I can pass a brush over.
Plenty to do and little time! No new news then.

I am learning a lot of things, though! I am practicing day by day (and mistake by mistake) new manual skills: laying undercoat and plaster, sanding, varnishing, painting, and I haven't even started on the wood. But I am (re)discovering how to relax by doing manual activities: I am on my own, I am not under pressure to finish (yet!) and I can take my time whilst listening to my music. I am also fooling around by trying to apply agile techniques to my day-to-day activities. Ok, there's a conflict of interest (I am both the customer and the development team) but I avoided a big upfront design and I am organising my activities to best respond to changing circumstances - and this has in fact helped: I found out on Friday that I have a meeting in London on Monday that I don't want to miss, and this forced a re-org of my activities.
In practice, I have a list of things I need to do, comprehensive to the best of my knowledge to date. I keep adding (or removing) to-do items as and when they come up and I re-prioritise the list every day: I pick the next to-do item from the list and do it, depending on what makes sense to do (I am the customer after all) and what I fancy doing (as I am the developer too). I also tend to organise the list to minimise waste (of time, mainly), taking into account task dependencies (take off the carpet before laying the new floor) and to minimise risks (paint the wall before laying the floor, because the paint may drip on it!).
The only visible side effect is that I need to go frequently to the nearby DIY shop to buy stuff. Not a big deal though: it's only 2 miles away, and it's also an excuse to see the cloudy sky.

PS: amazingly, when I took one of the skirting boards off the wall, I found a little book published by Lloyds Bank with the list of cash points in the whole UK - all in 6 pages, mind you - and a 2p coin, dated 1973 and 1974 respectively.

Uncle Pino

Sadly, today my uncle Pino passed away. He was a fine musician who played guitar and bass and toured the world with many famous Italian artists, including Domenico Modugno and Nini Rosso. He also published records: the one I remember very well is titled Marmalade, from the late 70's (copies are on sale on eBay.it at the time of writing).
I still remember the story that grandma Teresa used to tell me when I was a little boy, of him leaving Sicily when he turned 18 to fulfil his dream of being a musician.
And I'll always be grateful to him for having bought in Rome, in March 1985, on behalf of my parents, a - very much desired - Commodore 64: my parents' present for my birthday. (That machine started my passion for computers and programming.)

The Boss is in town!

Great show! Probably the best I have seen in years. Bruce Springsteen and the E Street Band played at the - almost full - Emirates Stadium in London: a 3-hour gig with a mixture of old and new stuff. You won't believe how that 62-year-old man was able to excite the crowd, running up and down the stage, shouting and playing Badlands, Born To Run, Thunder Road (my favourite), Rosalita, Lonesome Day and so on.
He also had time to pass on a couple of (silly) jokes about English stereotypes: the weather - today was a sunny day after a miserable week of clouds and rain - and tea time. Anyway... I took two pictures with my mobile, one before the start of the concert when the stadium was filling up, the other when it was already dark, so the quality is pitiful and you'll have to guess.



Friday 30 May 2008

Fighting cot death and side effects

It's today's news that some cases of cot death may be caused by bacterial infections.

If you have a child it's likely that a pediatrician (or equivalent) has told you about cot death - the death of an infant that is apparently inexplicable.

When my son Jacopo was born (19 months ago) we were told about it as we left the hospital. We were also told a few things to do to minimise the chances of this horrible situation occurring: don't smoke near your newborn, don't let him sleep in your bed and - above all - put him to sleep on his back (and not on his tummy, as was commonly practiced until about 15 years ago).

All nice and easy. But...

But nobody told us about a naughty side effect of one of these simple practices. If your child always stays on his back (and most of the time he/she does, as babies sleep more than 12 hours a day - also, ours actually disliked being put on his tummy even when he was awake) he/she may develop flat head syndrome. This is a cosmetic syndrome whereby the soft skull of the baby flattens. It is - clearly - caused by the fact that the baby lies face up with the back of his head on a not-so-soft surface (soft pillows should not be used with newborns).

As said, we weren't told, and Jacopo developed a minor flat spot on the right side of his skull. When we realised it, at 9 months old, the pediatrician told us not to worry, that it would normalise by the time he's two (it hasn't yet, and we're 5 months away from his second birthday) and that this can happen. We were told of tricks to play to encourage him to turn his head more frequently - move toys from one side of the bed to the other - and, optionally, to buy a specially made pillow. In the most extreme cases a helmet may be required, but that wasn't our case.

Well, to the point... why did nobody tell us what might happen? Obviously preventing cot death is the highest priority, but making parents aware of the side effects of having the baby sleep on his back is also important. So, if you're in the same situation we were in a year and a half ago, invent something to have your baby move his head when he's awake. He'll be grateful when he grows up.

Wednesday 28 May 2008

Agile 2008 - we'll be there!

We are finalising our work for Agile 2008. I led two teams that submitted two sessions to the conference, this August in Toronto, Canada. Both were accepted and the two papers will be published in the proceedings. We're now in the process of writing a summary of the sessions for the conference agenda. Here they are:

What's in the toolbox of a successful software craftsman?
Have you ever wanted to know which tools a big, distributed team of successful software craftsmen uses to implement their user stories? How they configure them to support agile development based on XP and Scrum and deliver to the agreed plan? This session will answer these questions and more. Three representatives of this team will tell you what's in their toolbox and how the toolbox supports four core agile practices that the team adopts to succeed: maximum project status visibility, effective communication, immediate feedback and ruthless automation.


Pushing the boundaries of testing and Continuous Integration
In this session, three representatives of an agile team will show how an automated build that executes robustness, scalability and performance tests helped them drastically improve the quality of their highly concurrent application server. They will also show how the team configured such builds in their continuous integration environment as well as what performance and robustness metrics they monitored. Finally, the team will show how valuable and effective this investment has been for capturing bugs and performance-related issues very early in their development process.


I look forward to getting there: I'll hopefully meet people I have already had the chance to talk to in the past (Kent Beck and JB) and people I have only recently had the honour of collaborating with (Ron Jeffries and Manfred Lange, who reviewed our papers for the conference).

I'll keep you posted!

Saturday 24 May 2008

And now a public blog

Subtitle: do I really need to do this?

Hello reader! Welcome to YAUB (yet another useless blog). I have decided to join the public blogosphere. The more obvious reasons are that I hope to practice my writing and to use this tool to "remember" and "share" stuff (scripta manent, after all) - specifically, things that I happen to be doing during my life. As for the real motivation, don't ask, I am still looking for it: I suppose it all has to do with the fact that humans are social animals and that they achieve well-being by participating in social life and the like. We'll see how it goes...

Actually, I have been publishing posts on a blog on my company's intranet for a few months; now that I am making the leap I'll migrate my old posts soon so you can enjoy them too. So you'll eventually see posts that are older than this one, once I finish my copy and paste job.

Cya!

Tuesday 29 April 2008

Microsoft and Open Source

It's very interesting to note Microsoft's involvement in the Open Source (or open source) arena.
I came across this presentation held by Sam Ramji - director of the Open Source Lab at Microsoft - at the latest EclipseCon. It shows how Microsoft and the Eclipse Foundation are working together on making Higgins and CardSpace interoperable and on building SWT on top of WPF (to let Eclipse shine on Vista).

One thought comes to mind: how long is it going to take to have Visual Studio as an Eclipse plug-in?

Friday 18 April 2008

Planning in Cork and Cyclomatic Complexity

I am in charge of prepping the Day0 happening in Cork, for the next Release Planning of the Web21C SDK Team.

A bit of background: the Day0 tradition started a year ago (at the Dublin Release Planning) as an opportunity for all the team members (located across London, Ipswich and Denver in the US) to meet and share knowledge. It happens the day before Release Planning - hence the name - and involves technical presentations and discussions on topics selected by a small number of members and presented to the whole team in - typically - two or three streams.

The process is quite interesting as we're organising the event as a mix of Open Space and Unconference. I have asked people to sign up for sessions to present and for topics to discuss at the Goldfish Bowl at the end of the day. (Yeah, I know that a preset agenda is not truly Open Space, but it'll help me manage the hotel resources and keep people focussed, whilst letting attendees comment and provide feedback to the presenters - and whoever has something to say on the day can book a session at the start of the day on a properly prepared sheet.)

I am confident that the event will be as successful as the one I organised in Glasgow last January.

I am even attempting to present two sessions, although I have the feeling it's going to be too much, being now involved in preparing the two papers for Agile 2008, the release of the Messaging capability at the back of the Web21C SDK on the 28th of April, and the Brown Bag at the end of May.

Anyway, one of the sessions I was thinking of presenting is on software metrics and refactoring complexity out of code: the good, the bad and the evil of metrics, and how they work as alarm bells for spotting complexity in the design and implementation. I am thinking about how to best make use of those numbers without getting too obsessed with making them nice and round. Especially the cyclomatic complexity, which I find quite useful as an indication of the complexity of the code. In fact, with Eclipse and the Checkstyle plug-in, it's possible to get the number on the fly. Another useful application of the McCabe number (the other alias of the CC) is to get an indication of how many tests are required to fully cover untested code - useful, clearly, when you inherit untested code.
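
As a reminder of how the number is computed (this is a made-up illustration, not material from the presentation): the cyclomatic complexity of a method is the number of decision points plus one, where tools like Checkstyle count if/while/for/case/catch, the ternary operator and the && and || operators as decision points. The hypothetical method below has three decision points, hence a McCabe number of 4, suggesting at least four test cases to exercise its independent paths.

public class CyclomaticExample {
    // Decision points: the two ifs and the && operator, so CC = 3 + 1 = 4.
    // Independent paths to test: total <= 0; total > 100 and loyal;
    // total > 100 and not loyal; 0 < total <= 100.
    public int discount(int total, boolean loyalCustomer) {
        if (total <= 0) {
            return 0;
        }
        if (total > 100 && loyalCustomer) {
            return 20;
        }
        return 10;
    }
}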

I'll think about writing a more extensive post once I finish the presentation.

Euro and Pound signs in Java

Never use the € (euro) and £ (pound) symbols in a Java source file. If you do, you're asking for trouble. Rather, use their Unicode representation (and possibly a comment telling readers what that Unicode character represents), that is:

public static final char EURO = '\u20AC';
public static final char POUND = '\u00A3';

The main problem occurs when your source is meant to be managed both on a Windows platform (where the default encoding is Cp1252) and on a Linux platform (where the default encoding is UTF-8).
Unless you have to, then, don't use them. If you really have to, the option is to share the same encoding between the platforms.

You may need to consider:

  1. -Dfile.encoding
  2. Use the same encoding on both platforms by passing the encoding attribute to Ant's javac task (see the sketch below)
  3. Change the default workspace encoding in Eclipse (Window > Preferences > General > Workspace) and set it to UTF-8 (bear in mind that then your € and £ won't be visible anyway)
Obviously this happens for any other character outside US-ASCII whose byte representation differs between the platform encodings.
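
As a minimal sketch of option 2 above (the directory names are hypothetical), the source encoding can be pinned explicitly on the Ant javac task, so that Windows and Linux builds compile the same bytes regardless of the platform default:

<!-- compile with an explicit source file encoding instead of the platform default -->
<javac srcdir="src" destdir="build/classes" encoding="UTF-8"/>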

Tuesday 18 March 2008

Sacrificing quality?

Not compromising on quality is not only your professional obligation but it is also important for your own joy of work and is critical for the company. (Ken Schwaber)

My take on this is that compromising by skimping on the test-dev-refactor loop (by eliminating refactoring and/or tests) is BAD. But working with the customer and negotiating the delivery of a Fiat 126 rather than an F430 at the end of the current release is something to encourage if resources and time are tight, in the spirit of the pure iterative approach. After all, if the business problem is "I need to drive from home to work", that is a perfectly valid solution to it.

This is what Jeff Patton was talking about at the last XPDay in London, when he spoke about sacrificing quality if time/resources are scarce.

Monday 18 February 2008

Babylon IT

An interesting page on computer languages. It shows a (maybe) complete list of existing computer languages (alive and dead) and their main features. Quite unsurprisingly, there are languages I have never heard of, and also some with amusing names, like Pizza. It's also a good source of links to external websites.

Monday 11 February 2008

Java top 5 technologies to learn during 2008

I regularly receive emails from several technology sites. This time one carried an interesting link to a blog entry describing the top 5 technologies to learn in 2008. Some interesting ones, aside from the usual Web 2.0 ones, are OSGi (at #5) and cloud computing (at #1). I have recently been exposed to cloud computing and grid computing doing scalability work for SpringRing (now Aloha). But, I must admit, I was not aware of OSGi.
Oh dear, something else on the stack.