Monday, July 15, 2013

PL/SQL Coding Standards, revisited

Formatting is the least important aspect of Coding Standards. Unfortunately, most sets of standards expend an inordinate number of pages on the topic. Because:

  1. The standards are old, or the person who wrote them is.
  2. Code formatting is an easy thing to codify and formalise.

Perhaps the source of most wasted energy is formatting keywords. Back in mediaeval times, when the only editors in use were vi or Notepad, or perhaps PFE, this was a pressing issue. But modern editors support syntax highlighting: now that we can have keywords in a different colour there is much less need to distinguish them with a different case.

Personally I prefer everything in lower case; I save about 23 seconds a day from not having to use the [shift] key. But other people have different preferences, and for the sake of the team it is better to have all the source code in a consistent format. The way to achieve this, though, is with automation, not a Word document. SQL Developer, PL/SQL Developer and TOAD all have code formatters (or beautifiers, yuck), as do other tools. Let's put the rules into the machine and move on.

What should the rules be? Well, everybody has an opinion, but here are my definitive PL/SQL Coding Standards, with an addendum of formatting guidance.

APC's Damn Fine PL/SQL Coding Standards



  1. Your code must implement the requirements correctly and completely.
  2. Your code must have a suite of unit and integration tests (preferably automated; see the sketch after this list) to prove it implements the requirements correctly and completely.
  3. Your code must implement the requirements as efficiently and performantly as possible.

APC's PL/SQL Code Formatting Guidelines



  1. Case. ALL CAPS is Teh Suck! Anything else is fine.
  2. Indentation. Align consistently. Spaces not tabs. Four spaces is the Goldilocks indent.
  3. Short statements. One statement per line.
  4. Long statements. Use line breaks; don't make me scroll.
  5. Naming conventions. Use prefixes to distinguish local variables, global variables and parameters from each other and from database objects (see the sketch below).
  6. Comments. A comment is an apology.

If you prefer something less minimal, William Robertson's PL/SQL Coding Standards remains the most complete and best annotated set on the web. Okay, so he does specify "3 spaces for each nesting level" (why? computing is all about powers of 2) but nobody's perfect.
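
For what it's worth, here is a short sketch of the guidelines above in practice: lower-case keywords, four-space indents, one statement per line, and p_/l_/g_ prefixes marking parameters, locals and package globals. The orders table and orders_seq sequence are invented for the example.

    create or replace package order_mgr
    as
        procedure create_order (p_customer_id in  orders.customer_id%type,
                                p_order_id    out orders.order_id%type);
    end order_mgr;
    /

    create or replace package body order_mgr
    as
        -- g_ prefix: package-level (global) constant.
        g_default_status constant orders.status%type := 'NEW';

        procedure create_order (p_customer_id in  orders.customer_id%type,
                                p_order_id    out orders.order_id%type)
        as
            -- l_ prefix: local variable; p_ prefix: parameter.
            l_status orders.status%type := g_default_status;
        begin
            insert into orders (order_id, customer_id, status, created_date)
            values (orders_seq.nextval, p_customer_id, l_status, sysdate)
            returning order_id into p_order_id;
        end create_order;
    end order_mgr;
    /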

Sunday, July 14, 2013

The personal is technical

On Friday evening I attended an IT Job Fair at the Amerigo Vespucci in Canary Wharf. Let me say straight away that hanging out with a random bunch of techies and recruiters would not be my first choice for a Friday evening. But, hey! I'm looking for my next role, and right now the market is too tough to turn down opportunities to find it. Besides, I was interested to see whether the Meetup template would translate into a recruitment fair.

On the day the translation was a mixed success. Unlike most Meetups, which can work with any number of attendees, a job fair requires a goodly number of recruiters, and recruiters will only turn up if they think there will be enough candidates to make it worth their while. This first event didn't achieve that critical mass, and I would be quite surprised if I got an opportunity from it. Nevertheless I will try to attend the next event, whenever that may be, because pop-up job fairs in bars are a great idea. Not for the reason you're thinking (I drank cola all evening), but because it was enjoyable. I talked with some interesting people, and got a couple of email addresses as well.

But more than that I was impressed with the concept. The informal social setting is good for understanding what a person is really like, specifically what they might be like to work with. This has to be worthwhile. The CV is a dead letter: it lists skills and accomplishments but doesn't animate or demonstrate them. The formality of the technical interview makes it hard to judge somebody as a person; anyway, it's generally aimed at establishing how much of their CV is true. The social element is often missed entirely. Big mistake. Software is like Soylent Green: it's made of people.

Personality matters: the toughest problems most projects face are political (organizational, personal) rather than technical (except for system integration - that's always going to be the biggest pain in the neck). A modern development project is a complex set of relationships. There are external relationships, with users, with senior managers, with other stakeholders (Security, Enterprise Architecture, etc.), any of which can jeopardize the success of the project if handled badly. But the internal relationships - between Project Manager and staff, between developers and testers, or developers and DBAs - are just as fraught with difficulty.

The personal is technical because the team dynamic is a crucial indicator of the likely success of the project. You don't just need technical competence, you need individuals who communicate well and share a common vision; people who are (dread phrase) team players. That's why Project Managers generally like to work with people they already know, because they already know they can work with them.

For a new hire, nobody knows the answer to the burning question, "Can I stand to be in this person's company eight hours a day, five days a week, for the duration of the project?" Hence the value of chatting about work and other things in a bar on a hot summer's eve over a glass of something with ice. I'm not sure how well the model would work for recruitment agents, but I think it would suit both hirers and hirees. It's not a technique that scales, but if people made better hiring decisions perhaps that wouldn't matter?

Thursday, July 11, 2013

UKOUG Analytics Event: a semi-structured analysis

Yesterday's UKOUG Analytics event was a mixture of presentations about OBIEE with sessions on the frontiers of data analysis. I'm not going to cover everything, just dip into a few things which struck me during the day.

During the day somebody described dashboards as "Fisher Price activity centres for managers". Well, Neil Sellers showed a mobile BI app called RoamBI which is exactly that. Swipe that table, pinch that graph, twirl that pie chart! (No really, how have we survived so long with pie charts which can't be rotated?) The thing is so slick, it'll keep the boss amused for hours. Neil's theme of using data visualization to convey a message or tell a story was picked up by Claudio Bastia and Nicola Sandol. Their presentation included a demo of IConsulting's Location Intelligence extension for OBIEE. The tool not only does impressive things with the display of geographic data, it also allows users to interact with the maps to refine queries and drill down into the data. This is visualization which definitely goes beyond the gimmick: it's an extremely powerful way of communicating complex data sets.

A couple of presentations quoted the statistic that 90% of our data was created in the last two years. This is a figure which has been bandied about but I've never seen a citation which explains who calculated it and what method they used (although it's supposed to have originated at IBM). It probably comes from the same place as most other statistics (and project estimates). What is the "data" the figure measures? I'm sure in some areas of human endeavour (bioinformatics, say, or CERN) the amount of data they produce has gone metastatic. And obviously digital cameras, especially on phones, are now ubiquitous, so video and photographs account for a lot of the data growth. But are selfies, instagrammed burgers and cute kittens really data? Same with other content: how much of this data explosion is mirroring, retweets, quoting, spam and AdSense farms? Not to mention the smut. Anyway, that 90% was first cited in 2012; it's now 2013 and somebody needs to invent, sorry, derive a new figure.

The day rounded off with a panel and a user presentation. Toby Price opened the Q&A by asking Oracle's Nick Whitehead how Hadoop fits into an Oracle estate. It's a good question. After all, Oracle has been able to handle unstructured data, i.e. text, since the introduction of ConText in 8.0 (albeit as a chargeable extra in those days). And there's nothing special about MapReduce: PL/SQL can do that. Here's the impertinent answer to this pertinent question: Hadoop allows us to run massively parallel jobs without paying Oracle's per-processor licences. Let's face it, not even Tony Stark could afford to run a one-thousand-core database.
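
To unpack that "PL/SQL can do that" remark: the nearest native equivalent to MapReduce is a parallel-enabled pipelined table function (the "map" stage) feeding an ordinary aggregation (the "reduce" stage). The sketch below counts words in a hypothetical docs table, invented for the example; the point is the shape of the solution, not the licence bill.

    -- Hypothetical source table: docs(id number, txt varchar2(4000)).
    create or replace type word_t as object (word varchar2(100));
    /
    create or replace type word_tab as table of word_t;
    /

    -- "Map" stage: emit one row per word. parallel_enable lets Oracle
    -- spread the input cursor across parallel slaves, much as Hadoop
    -- distributes its input splits.
    create or replace function map_words (p_docs sys_refcursor)
        return word_tab
        pipelined
        parallel_enable (partition p_docs by any)
    as
        l_txt  varchar2(4000);
        l_word varchar2(100);
        l_occ  pls_integer;
    begin
        loop
            fetch p_docs into l_txt;
            exit when p_docs%notfound;
            l_occ := 1;
            loop
                l_word := regexp_substr(l_txt, '\w+', 1, l_occ);
                exit when l_word is null;
                pipe row (word_t(lower(l_word)));
                l_occ := l_occ + 1;
            end loop;
        end loop;
        return;
    end map_words;
    /

    -- "Reduce" stage: aggregate the mapped rows.
    select   word, count(*) as occurrences
    from     table(map_words(cursor(select txt from docs)))
    group by word
    order by occurrences desc;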

The closing session was a presentation from James Wyper & Dirk Shelley about upgrading the BI architecture at John Lewis Partnership. They described it as a war story, but actually it was a report from the front lines, because the implementation is not yet finished. James and Dirk covered the products - which ones worked as advertised, which ones gave them grief (integration was a particular source of grief). They also discussed their approach to the project, relating what they did well and what they would do differently with the advantage of hindsight. This sort of session is the best part of any user group: real users sharing their experiences with the community. We need more of them.