Plans for Xmas

December 17 2017

yesterday I heard my little daughter being told by her mom “honey, let’s prepare a list for tomorrow’s shopping!”. don’t ask me why, but I then started thinking about one of my favourite topics in software development: release planning.

imagine you’re working on an aged software application for your company (eg: an online travel agency), composing products from 3rd-party resellers (eg: flights, train trips, hotel accommodations), adding some extra markup, and selling them as bundles (eg: holidays). now imagine that your company aims to add an additional product (eg: additional baggage on flights), available from many of the existing resellers, but not yet part of your offering.

imagine that, from a quick analysis and market research, your company defines an initial release around integrating products from a few resellers only (eg: additional baggage from the 5 topmost airline companies).

lastly, imagine that your software application has been migrated from a monolith to an (almost) microservices architecture, so that you have dedicated backend components for topics such as resellers (eg: airline/hotel integrations), bundles (eg: markup rules, market-specific composition rules, etc.) and a frontend component for interacting with users.

how would you then approach release planning for the very first project release?

let’s try listing all activities to be performed, such as:

  • collecting product details from resellers (eg: parsing XML or JSON payloads in airline HTTP search API responses)
  • storing new products into the catalog database (eg: adding a BAGGAGE table vs. adding a BAGGAGE type to the PRODUCTS table on the RDBMS)
  • defining markup rules in the bundle database (eg: extending the rule engine with a new BAGGAGE product type)
  • preparing a new UI for showing new product details and selecting desired products (eg: a simple “baggage-shaped” widget for all available sizes and weights)
  • storing selected products in the orders database (eg: full baggage details vs. a reference to available baggage previously collected, stored in the ORDER_ITEMS table on the RDBMS)
  • purchasing selected products from resellers (eg: adding XML or JSON content to airline HTTP booking API requests)
  • adding purchased product details to the confirmation email (eg: a simple statement with a quick recap of allowed sizes and weights)

that’s a lot of stuff.

there are many risks, probably most related to external service APIs, because they’re not under our control (are those products available for purchase on test/sandbox environments as well?). but also from a domain perspective, modelling these new products could imply tricky challenges, such as storing full details vs. a reference, or reusing the existing product and order detail models for sizes/weights.

a first approach for planning a release is, well, simply planning those activities in that exact order: start with parsing API responses, end with updating the confirmation email. in the end, that’s the work to be done, isn’t it?!

let’s call this a “task-oriented” release planning. in fact, those activities smell like programmer tasks: technical, “horizontal” (impacting one layer/component at a time), simple to perform without additional context (eg: what’s coming next). you’ll probably “see” some visible behaviour only at the end of the whole release, or halfway through it: the new UI for users and the new sentences in the confirmation email, which should both be “feature-flagged” off until the release is complete (but feature-flagged on in the QA env or for some test users).
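
to make the feature-flag idea concrete, here’s a minimal sketch (in python, with made-up names like `FeatureFlags` and the `extra-baggage` flag): the new widget stays dark for real users until the whole release is complete, but can be switched on for QA or test users.

```python
# minimal feature-flag sketch; class and flag names are made up for illustration
class FeatureFlags:
    def __init__(self, enabled=()):
        self.enabled = set(enabled)

    def is_on(self, feature):
        return feature in self.enabled


def render_checkout(flags):
    # the baggage widget is rendered only when the flag is on,
    # so half-finished backend work can ship silently
    widgets = ["flights", "hotels"]
    if flags.is_on("extra-baggage"):
        widgets.append("baggage")
    return widgets


production = FeatureFlags()                   # flag off for real users
qa = FeatureFlags(enabled=["extra-baggage"])  # flag on in the QA env

print(render_checkout(production))  # ['flights', 'hotels']
print(render_checkout(qa))          # ['flights', 'hotels', 'baggage']
```

nothing fancy: the point is that the flag check lives in one place, so flipping it on is a one-line change once the whole release is done.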

a second approach for planning a release is forgetting about component/layer boundaries and starting to focus on users, meaning both customers and admins. in other words, focusing on visible behaviour, eventually adding minimal tooling to make it really “visible”. in the example, we could have:

  • dashboard with available products, for admin users (eg: listing baggage that would be presented to real users, for supported companies)
  • new products checkout, for real users
  • selected product purchasing, for real users
  • selected product details in confirmation email

let’s call this a “feature-oriented” release planning. items in this plan are almost functional, “vertical” (impacting many components/layers). the risk with this approach is having “long” development activities before considering a feature “done”. with this approach you’ll learn what incremental development is: adding one piece at a time.

as a note, I’m referring to the term “feature” as nowadays presented in SCRUM-like release planning: something understandable by C-level executives. I don’t really like the term, I’m used to calling them themes (or epics, with no particular distinction, even if one exists by the book). anyway, let’s stick with this for a moment.

a third approach for planning a release is slicing features, trying to keep them as slim as possible, but still vertical. the very first user story will look something like this:

  • focus on the very top reseller only (eg: only one airline company)
  • support only one fixed product for that reseller (eg: only 1 extra baggage)
  • enable it on a small percentage of orders (eg: only flights internal to that company’s main country, where no currency/units conversion is required)

you’ll then move either to more resellers, more products or more orders, according to your company’s strategic vision and to feedback collected while developing.

let’s call this a “user-stories-oriented” release planning. you’ll probably explore much of the complexity as early as possible, using simplifications to let you go faster. you’ll surely then have to change components/layers many times as you progress in the development. with this approach you’ll learn what iterative development is: rework, from simpler to more refined.

given this long introduction, and the example, my very personal suggestion (biased by an XP style of planning) for doing release planning is starting with features (or themes, as I call them), and then moving to user stories. I know, not much original content here (in fact, I’ve been sharing thoughts like this for many years)! sorry mate :)

anyway, two additional hints:

  • don’t focus on tasks (as in the first approach) in the beginning, but let them flow as you start analysing and estimating user stories. a high-level technical architecture is enough to start planning a release (and later releases as well)
  • don’t have separate release plans, one for C-level people with features, and one for development people with user stories. this would require continuous “mapping” between the two. stick to one shared release plan, share it on the company wiki, print it and put it on walls, update it often

don’t get me wrong, you’ll probably require a higher-level plan for executives, just don’t call it a release plan. I’ve seen them referred to as initiatives or goals, but I don’t have much experience with either.

surprised by such a long post, after almost four and a half years? don’t tell me, I’m astonished as well! well, let’s consider this my Xmas gift for you!


One step up

February 7 2012

it’s going to take so much longer than I expected to achieve a good enough piece of writing about this topic, so I decided to start small, share a few initial thoughts, and collect feedback. abstract: what about design concepts and principles when applied at the system level? I mean, do they still stand? is there anything I can learn from the object level?

I’m playing with architecture these days. yes, playing is the right word. I’m almost in a brain-storming phase, reading books and articles, writing mind maps, and collecting data from current and past projects I’ve been involved with. but the analogy soon became clear: what about good design principles, evaluated as architecture principles?

so, here they come, just rough bullet points.

information hiding. don’t let implementation details of a given component pollute the system. should client code be aware of persistence mechanics or sorting algorithms? no, it shouldn’t. this information should be hidden inside the component providing those services. so, at the system level, why do we consider integration databases an option? why don’t we hide implementation details such as the relational database system (DBMS) and query language (SQL dialect)? wouldn’t it be better to stick to a shared data format, such as JSON, XML or CSV files?

encapsulation. support cohesion and keep related logic together, near the data that logic works on. don’t ask for values, but tell behaviour to be executed. sure, this is strongly related to information hiding. but what about system design? what about telling a system to perform a given task, instead of asking for its data? public APIs and application databases are recipes you can select from your architecture cookbook.
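
the “tell, don’t ask” difference can be sketched like this (python, with invented names; a toy, not a real order model):

```python
# hedged sketch: "ask" exposes data, "tell" keeps logic near the data it works on
class AskingOrder:
    def __init__(self, items):
        self.items = items  # clients must know the pricing rules themselves


class TellingOrder:
    def __init__(self, items):
        self._items = items

    def total(self):
        # behaviour lives with the data: clients just tell, they don't ask
        return sum(price for _, price in self._items)


items = [("flight", 100), ("baggage", 25)]

# asking: client code reimplements the calculation (and every client repeats it)
asked_total = sum(price for _, price in AskingOrder(items).items)

# telling: one place owns the rule
told_total = TellingOrder(items).total()

print(asked_total, told_total)  # 125 125
```

same result, but in the “telling” version the rule has one home; at the system level, that home is a public API instead of an exported table.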

ports and adapters. achieve isolation from external dependencies in application code: publish a clear service interface (port, facade) in the application, and provide an implementation (adapter) for a specific external service or toolkit. this is also known as hexagonal architecture. well, when it’s time to design system dependencies, what about putting in an extra layer of abstraction, protecting your system from external systems’ implementation details? this idea is what Pryce and Freeman call simplicators. this video by Pryce explains the idea further, presenting a system where all external systems are accessed through an intermediate HTTP facade, which completely hides those protocols’ details and pitfalls.
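
a minimal sketch of the port/adapter shape (python, all names invented; a simplicator would be the same shape, just deployed as a separate intermediate system):

```python
# hedged sketch of a port (service interface) and an adapter; names are invented
class FlightSearchPort:
    """port: what the application needs, expressed in its own terms."""
    def search(self, origin, destination):
        raise NotImplementedError


class AcmeAirlineAdapter(FlightSearchPort):
    """adapter: hides one reseller's API quirks behind the port."""
    def __init__(self, client):
        self._client = client  # eg: an HTTP client for the airline API

    def search(self, origin, destination):
        raw = self._client.get(f"/search?from={origin}&to={destination}")
        # translate the external payload into the application's own model
        return [f["code"] for f in raw["flights"]]


class FakeClient:
    """in tests, the port lets us swap the real airline out entirely."""
    def get(self, path):
        return {"flights": [{"code": "AC1"}, {"code": "AC2"}]}


port = AcmeAirlineAdapter(FakeClient())
print(port.search("MXP", "AMS"))  # ['AC1', 'AC2']
```

application code only ever sees the port; the airline’s payload format stays locked inside the adapter, exactly like the HTTP facade in Pryce’s example.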

to recap:

  • sharing a database among multiple applications: don’t! consider databases as object internal state, which you’d probably not expose publicly. would you?
  • exporting data from application databases for reporting, calculations, and so on: don’t! consider those exported data as object getters, which you’d probably not publish as part of the object interface. would you?
  • providing connectors to external systems and toolkits in the application: to be fair, not so bad. but consider encapsulating them in one intermediate system

does it sound good to you?
to be continued..

Release the dogs

December 17 2011

I’ve been thinking about this post for a few months, and I’ve now got some more data to share. the title stuck in my mind while listening to Boysetsfire’s “Tomorrow Come Today” album, driving back home after an intense day of working with the team.

it all started in a study session on the XP practices, discussing what was stopping us from applying some of them. for example, Small Releases. the immediate objection was “well, we deploy more than once a week!”. in fact, user stories are moved to production as soon as they’re completed, validated and accepted. so, is this a nonsense question?

this week we kicked off a new project, which is critical because it aims to improve the core of the company’s product. the stakeholders asked for a technical solution that could answer the questions “is this worth it? does the company receive any benefit from the new business rules?”. so, we agreed on an architecture that keeps the two sets of rules running in parallel for a short period, and as soon as we collect positive data, we’ll switch the product to the new system.

it took us two days to come up with a Release Plan and an Iteration Plan for the first iteration. we collected themes and user stories, discussed a feasible architecture, then set rough estimates. this initial process showed we could answer the stakeholders’ question in less than one month. then, it would take us another month to develop enough user stories to switch to the new system. eventually, we could then focus on the less important themes, which we didn’t even analyse nor estimate, just tracked down on index cards.

to recap, we focused on the smallest chunk of stories that could be put into production in order to bring value as a whole, and attached a release date to it. then we iterated this process. the result is a three-release plan, which is being adjusted and adapted whenever we find something new about estimates, technical risks, and the domain itself. we also know the first release is about two iterations long. and we know what’s going to be developed in the next two weeks, in much more detail, having discussed acceptance criteria and technical alternatives.

back to the initial study session, I think the problem is due to the fact that we talk in Italian. the Italian for “release” and “deploy” is exactly the same: “rilascio”. we say “piano di rilascio” for “release plan”, and “rilascio in produzione” for “deploy to production”. combine this with the continuous delivery practice gaining more and more popularity these days, and maybe the communication problem gets explained. to be fair, the project we referred to, deployed to production frequently, was more in a maintenance than a development phase. there was no clear reasoning about planning. hence the confusion.

to recap, and to state a take-away message: don’t confuse release planning with production releases, especially if in your native language the two words sound similar. my suggestion is to mentally translate to “milestones” for release planning, and to “deploys” for production releases. would this help you?

New books on the shelf

February 6 2011

I’m investigating structures and behaviour at the system level, trying to improve my knowledge in that area with a pragmatic approach, doing a few exercises like reviewing and documenting past projects, and organizing concepts and buzzwords along a “design/software architecture/system architecture” scale.

then finally, I bought these books..

“Release It!”, which I first borrowed from the company’s bookshelf and read in a rush (mostly on a train, going back and forth during a two-week consultancy in Venice). one of the most illuminating technical books I’ve ever read, on the topic of architecture and quality attributes.

“Essential Software Architecture”, which I discovered while surfing the Sydney University’s Enterprise-Scale Software Architecture class lectures. seems like a really compact and valuable reference book, giving insights into topics like enterprise application integration, messaging infrastructure, middleware and application servers.

“Software Architecture: Perspectives on an Emerging Discipline”, a ’96 book aiming to organize knowledge and patterns about software architecture styles, as perceived in the nineties: client-server and distributed computing, pipes and filters, layered systems, and the like.

“Software Architecture in Practice”, a huge book from Carnegie Mellon’s SEI institute. well, don’t think I’m reading it in full; I was mainly interested in quality tactics: a cookbook to choose recipes from while looking for specific topics, such as scalability or capacity.

well, my reading list got bigger!

these last six months have been incredibly full for me, i’ve learnt so many technologies and technical things: RubyOnRails web application development (and a bit of S3 cloud deployment), Hippo CMS 6 and Cocoon pipelines, and now the Day CQ stack, which means JCR and Jackrabbit, the Sling RESTful web framework, and OSGi bundles with Felix. oh my!

yep, i’m currently working for a big italian TLC company, developing their internal portal based on CQ5. i was completely new to content repositories and web content management, but i got it quickly: it’s a different paradigm, data is modeled around resources, not around relations (as with relational databases).

btw, what i want to show is my journey with CQ stuff, and how our development approach has grown over the last weeks (and where it’s going). beware: there’s a lot of technical stuff (maven, Day CRX, Apache Sling, Apache Felix); i won’t explain everything in detail, so i’ll refer to documentation and other blog posts.

so, first of all, start by reading the CQ tutorial “How to Set Up the Development Environment with Eclipse”: please, spend almost one hour following all the steps, even the boring ones, like grabbing jars from the CRX repository and putting them manually into the local maven repository. in the end, you’ll have two projects (ui and core), one page with a template (manually created and edited), executing a component as a JSP script (imported through VLT), which uses “domain” logic provided by a plain old Java class (from the core project). that’s a lot of stuff!

then, let’s enter the magical world of CQDE, a customized (old version of) Eclipse, which provides access to remote content (via webdav) from within an IDE, so that you can edit, compile and debug code as if it were stored locally (but it isn’t). at first, it seems a lot better than VLT-ing from the command line; but soon you’ll miss two things: versioning, and sharing code with others. even if it’s not clear in the tutorial, ignoring VLT-specific files lets Subversion also version the content stored in src/main/content/jcr_root. that’s not always fun, like manually merging conflicts on XML files, but it’s really a lot better than blindly editing code with CQDE, with no way back! also, sometimes i’ve found it much easier to edit pages as XML files than using the WCM editor (CQ authoring tool).

ok, relax, take a deep breath, and think about what you’ve done so far. do you like it? are you comfortable with this? well, i wasn’t; i missed my IDE-based development, checking code in and out, running automatic tests all the time. the good news is we can do better than this, the bad news is we’ll still miss something (so far, red/green bars for the UI). to recap, we can choose between:

  1. remote coding and debugging, with CQDE: no “native” versioning, VLT can be used as a “bridge” to Subversion
  2. local coding, with any IDE (eg Eclipse): still can’t compile JSP files, VLT used to deploy UI code

next step is (well, i’m a bit afraid, but the time has come)… deploying an OSGi bundle with maven, with both UI code and initial content to put into the repository.

step one: compiling JSP files locally. ingredients: JARs as local maven dependencies and the sling maven jspc plugin.

i could not find any public Day maven repository (and it makes sense, from a business point of view), but as the tutorial shows, everything we need is already available from CRX. so, it takes a while, but referring to the /libs/xyz/install convention and doing searches via the CRX explorer you can come up with something like this:


function grabDependency() {
  JAR_URL=$1
  REPOSITORY_DIR=$HOME/.m2/repository/$2

  wget --user=admin --password=admin $JAR_URL
  mkdir -p $REPOSITORY_DIR
  mv $(basename $JAR_URL) $REPOSITORY_DIR
}

cd /tmp; rm -rf deps; mkdir deps; cd deps

grabDependency \
  http://localhost:4502/crx/repository/crx.default/libs/commons/install/day-commons-jstl-1.1.2.jar \
  com/day/commons/day-commons-jstl/1.1.2

# ... grab other jar files

then, let’s add the JSPC plugin to the maven build chain, plus the CQ and Sling dependencies (see the attached file with sample code). this is a simple example; you’ll probably need to override the plugin’s sling jar dependencies with the versions used by application code!


moving JSP code into src/main/scripts (under the apps/myApp subfolder) should be enough to have maven build (mvn clean compile). just remember to grab global.jsp from CRX and put it under the src/main/scripts/libs/wcm folder. Eclipse will also compile (regenerate project files with mvn eclipse:eclipse), but it needs another copy of global.jsp in /libs/wcm (i know, it’s silly; i’ll check this next time).

step two: packaging an OSGi bundle with UI code and content nodes. ingredients: the Felix maven bundle plugin.

the key concept for me was understanding what to put into the bundle. i was used to having JSP files on CRX under the /apps node, editing node properties such as jcr:primaryType (cq:Component, cq:Template and the like) and jcr:content. deploying the application as an OSGi bundle is slightly different: code is available as bundle resources (from the bundle itself), while only node properties are copied from the bundle to the CRX repository, as initial content. this separation was not clear to me in the beginning, but it now makes sense (even if less duplication would be nice, for example in content structure).

so, we should create a bundle with:

  • included resources: all required resources (maven resources and src/main/scripts folder) to be later referred
  • bundle resources: .class and JSP files
  • initial content: node properties, as JSON files (i decided to put them into src/main/resources, under CQ-INF/initial-content subfolder)

more details are available on the Sling website and on this blog post.

so, let’s add the Felix bundle plugin to maven (remember to declare bundle packaging with &lt;packaging&gt;bundle&lt;/packaging&gt;):


<!-- sketch reconstructed from my notes: check the plugin version and the
     exact folders against the attached sample code -->
<plugin>
  <groupId>org.apache.felix</groupId>
  <artifactId>maven-bundle-plugin</artifactId>
  <extensions>true</extensions>
  <configuration>
    <instructions>
      <!-- included resources folders (to be later referred):
           maven resources and JSP files -->
      <Include-Resource>{maven-resources}, src/main/scripts</Include-Resource>

      <!-- resources available from within the bundle
           (not available as CRX nodes):
           compiled .class files and JSP files -->
      <Sling-Bundle-Resources>/apps/myApp</Sling-Bundle-Resources>

      <!-- content initially copied into CRX nodes:
           properties as JSON descriptors -->
      <Sling-Initial-Content>
        CQ-INF/initial-content/apps/myApp/; overwrite:=true; path:=/apps/myApp,
        CQ-INF/initial-content/content/sample/; overwrite:=true; path:=/content/sample
      </Sling-Initial-Content>
    </instructions>
  </configuration>
</plugin>

this should be enough to create a package with mvn clean package. we’re almost done..

step three: installing the bundle. ingredients: the maven sling plugin.

with CQ there are two ways to install a bundle: putting it under an /apps/myApp/install folder or using the Felix console. i chose the latter, which turns out to be a plain POST request to the console URL. anyway, we can hook into the maven build chain with the Sling plugin, this way:

<!-- sketch reconstructed from my notes: adjust slingUrl and credentials
     to your instance -->
<plugin>
  <groupId>org.apache.sling</groupId>
  <artifactId>maven-sling-plugin</artifactId>
  <executions>
    <execution>
      <goals>
        <goal>install</goal>
      </goals>
    </execution>
  </executions>
  <configuration>
    <slingUrl>http://localhost:4502/system/console</slingUrl>
    <user>admin</user>
    <password>admin</password>
  </configuration>
</plugin>

just type mvn install and we’re done.

that’s it. a lot of setup, especially if, like me, you’re new to maven and OSGi. anyway, i’ve written this mainly for later reference and to share thoughts with colleagues. i’ve shown three approaches to developing with CQ, tested in my daily work over the last month. in my view, deploying OSGi bundles is the best one, so far; it’s a trade-off between ease of use while debugging (yep, no automatic UI tests yet) and development lifecycle (versioning, building, packaging). i hope to gather much more info next year, and probably something will be easier! the next step will be setting up automatic tests for JSP files, using Koskela’s JspTest tool.

sample code is here: please, follow the README and have fun.

well, happy new year to everyone!


November 6 2009

i’m back.

for the last two weeks, i’ve been staying in the lovely city of Amsterdam, working for a customer of my dutch colleagues. challenging, amusing, funny and resource-consuming; here’s a brief recap of my last 15 days.

first of all, thanks from the bottom of my heart to Maurizio “daje forte” Mao Pillitu, for hosting me in his nice and comfortable home, just outside the city centre. he’s been very kind and friendly; i hope i have somehow paid him back with my italian-style cuisine.

so, i’ve been working for Hippo, a young and energetic open-source company born around their CMS product: it’s in a nice building downtown, just a 15-minute walk from Dam square (yep, i loved walking through the city lanes after a full day of work). guys at Hippo are friendly and passionate, devoted to open source; they also organize forge-fridays, a sort of coding dojo with a focus on releasing working plugins (for Hippo CMS, of course) by the end of the afternoon.

Hippo CMS is gaining a lot of popularity among public institutions in the Netherlands, something my dutch colleagues have been working hard on as well. but even if Hippo 7 is getting popular, there are still a lot of projects done with the older product version, Hippo 6. and that’s where my story begins.

i’ve been working for the municipality of Schijndel, a little dutch town, helping its IT management improve and automate the publishing of meeting agendas and reports. yeah, you heard it right: they record and publish (with a little delay, of course) audio and text content for every council meeting. being an italian citizen, all that transparency and devotion sounds strange, but it’s really laudable.

the first challenge i faced was, of course, translating all the documentation from dutch to english, from the analysis PDF to past emails with the customer. i didn’t have everything clear at first, but thanks to double-checking with dutch colleagues i finally got it. (anyway, it’s funny how in almost every translation from dutch the verb ends up in the very last part of the sentence. it really reminded me of my latin classes at college).

then i finally entered the dark tunnel: technology viscosity and an indecent web of dependencies, also known as Maven 1. gosh, i really had to work hard to get a successful build on top of Java 1.4, Axis2 and Cocoon 2.1, which turned out to mean classpath monkey-patching, using ant tasks, jelly scripts and maven postGoals. damn!

add the lack of support from the webservice’s developers and consultants, and the soup is ready to be served! in fact, i only had a working test environment (i mean, one representative of the customer’s, with valid data) almost 3 days before the scheduled project end. that’s awesome, isn’t it? how the hell did i manage to get the work done?

by applying what i later called the “abstract and adapt” strategy: understand the domain, abstract from implementation details, then adapt the code when things get clearer. well, that’s the hexagonal architecture (but, you know, we like coining sexy names). so, i spent the whole first week coding the application logic decoupled from the real system behaviour, which in fact was unknown. Agenda and its Repository, Content and Storage, Indexer and Importer: these are all roles i’ve been writing, test-driven, from day one. that’s not easy, and of course it’s risky; but it was the best i could do.

reading the webservice specifications and WSDL, i could also guess how that slim layer should behave, but i really got it wrong at first! then, i had an ah-ah moment during the first weekend, and changed the webservice adapter to reflect my new thoughts, without needing to modify the domain logic too much (in fact, i also improved my domain knowledge). i changed unit tests, and added a sort of spike: tests with no assertions, just logging the actual parsed responses, so that i could “see” with my own eyes the current webservice behaviour, at each test run.

and i was right! i clearly remember how shocking it was to read some parsed data in the console log, when the test environment was finally set up! you know, i was going for lunch, i ran all the tests one more time before locking the workstation, and i saw it: “parsed 6 agenda”, followed by a so-nice full toString(). that was awesome, really: my tests told me the setup was done before the consultants’ confirmation email arrived, 30 minutes later!

then, i had my journey to Schijndel, to discuss deployment and testing on the customer’s network. the trip took 2 hours; i also had a 30-minute stop in ‘s-Hertogenbosch, which i spent walking downtown, among nice gothic buildings and golden dragons.

it’s shocking how efficient the dutch national transport website is, with its door-to-door journey planner, really. well, it’s a shame it’s not updated with temporarily moved bus stops, which could have saved me one hour in the late evening!

anyway, that’s it, a recap of techy stuff mixed with journey reports. thanks to the whole dutch office for the opportunity and the drinks, looking forward to our next works together!

It seams open

May 4 2009

i can clearly remember when i first discussed DIP and OCP with others: it was two years ago, during my apprenticeship as an XPer. to me, it was nothing new; i had already studied all the principles that now come under the SOLID acronym. but, probably, i hadn’t digested them enough: something that only came later with experience.

Dependency Inversion has been my favourite ever since, for its multiple implications depending on what “high level” and “low level” mean; in my view, there are at least two meanings: abstraction (high-level policies vs. low-level details) and layering (close-to-user layers such as the GUI vs. infrastructure layers). even more, i loved its love-hate relationship with Dependency Injection (maybe more on this in a separate post).

Open-Closed is harder to understand, at first. how could you “change without modifying”? abstraction is the key! let modules depend on abstractions, then provide new implementations when behaviour has to change, without the need to modify existing code. in other words, always depend on abstractions: which, in the end, is DIP itself. OCP and DIP are such a “yin and yang” of software design: each helps achieve the other.

then, when i first discussed OCP with team-mates, i pushed for an analogy with Feathers’ Seam Model. it’s discussed in the “Working Effectively with Legacy Code” book: use seams to let legacy (which means untested) code be tested. to be fair, my analogy was not welcomed much! i had to force my thesis a bit, and in the end not everybody was convinced.

two years later, it happened again! indeed, a few months ago we had a study group on OCP. i was in charge of preparing material to study, and i chose a few corollary articles: the first chapter from the GoF Design Patterns book, which focuses on “programming to interfaces”, and WELC chapter 4, “the Seam Model”. this time i was more convincing, and the analogy between OCP and the Seam Model became clear during our study session! and now, i want to tell you too.

after reading the bunch of articles i had prepared, i asked a colleague to state in a few words what OCP was about. he said “change behaviour without modifying code”. great! then, i asked him again to state what the Seam Model was about, and he said “let code behave in a different way without modifying it”. well.. nothing left to say!

abstraction is the key, that’s true. but what about code which doesn’t follow OCP/DIP? it doesn’t depend on abstractions. we can modify it, refactoring, but we need an automatic test suite, in order to guarantee no behavioural change. and that’s exactly what Feathers’ model is about: change the code a little bit, putting in seams, to test it in isolation.

on the other side: what seams can you use to test OCP-compliant code? of course, the already existing abstractions: you just have to change the enabling points (in a test, usually in fixture setup).
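
a tiny sketch of the analogy (python, invented names): the abstraction is the seam, the place where the collaborator gets chosen is the enabling point.

```python
# hedged sketch: an existing abstraction acts as an object seam,
# and the constructor argument is the enabling point
class Tariff:
    def rate(self):
        return 0.25  # production behaviour


class FixedTariff(Tariff):
    def rate(self):
        return 0.5  # alternative behaviour, no change to Invoice needed


class Invoice:
    def __init__(self, amount, tariff=None):
        # enabling point: tests (or new features) inject another Tariff here
        self._amount = amount
        self._tariff = tariff or Tariff()

    def tax(self):
        return self._amount * self._tariff.rate()


print(Invoice(100).tax())                 # 25.0 (closed for modification)
print(Invoice(100, FixedTariff()).tax())  # 50.0 (open for extension)
```

the same `Tariff` abstraction serves both goals: OCP extends behaviour through it, and a test doubles it through the very same enabling point.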

in the report for our study session, on our internal wiki, we wrote:

We then discussed what OCP and Feather’s Seam Model have in common:

  • they seem the same idea, applied to reach different goals
    • OCP: put abstractions to isolate from future source code changes
    • Seam: put abstractions to test applications without changing its source code
  • to recap
    • closure/abstraction = seam
    • “main() routine”/factories = enabling point

to be precise, Feathers’ model is about three different techniques, useful for testing legacy code written in any language, not just object-oriented ones. he talks about preprocessing seams, link seams and object seams. so, the analogy with OCP holds just between abstractions and object seams, even if sometimes linking techniques are also used to achieve abstraction (such as reflection or some configuration-based IoC tool).

so, when i watched Misko’s clean-code-talks videos, i was surprised to hear him use the term “seam” while talking about DI and SOLID principles: he confirmed my analogy makes sense!