The Most Important Idea for Software Delivery Teams

What are the most important ideas for a software delivery team to discuss?

Later today, I will be attending a meeting about software estimation. We will discuss whether estimating is essential and figure out the right level of estimation for our teams. Overall, reasonable goals and a good approach. But these are not the most important topics for a software delivery team to be discussing.

Discussing technical practices is an important idea for software teams. Are we going to use TDD or BDD? Can we deploy this in a serverless environment? Should we ship straight to production? What was our MTTR over the last quarter? Are we deploying often enough? Do we build our own telemetry solution? These are valuable ideas that get team members excited about their work. And these are not the most important ideas.

Discussing process is a significant component of building a software team. Should we use Scrum, XP, or Kanban? Can we improve our practices? How many meetings do we need? When is the next retrospective? Which format is best for writing user stories? Is our daily stand-up efficient? There is value in each question. And these are not the most important ideas.

Discussing business goals is vital for software delivery teams. How many downloads did we get in the app store? What is our drop-off rate? What is our CAC/ATV ratio? What’s the burn rate next month? What are our OKRs, and do they support our MIT? These are valuable ideas. And these are not the most important ideas.

The most critical question a software team can discuss is, “Are we making things a little better for our users?” Are they a little happier, a little more productive, a little better off because of your team? Sometimes that may be through software and the systems that we build. Other times, it may be through simplification: removing features to eliminate an entire class of problems.

Many times, we don’t know. Then, we depend on the technical practices to help us figure things out. We lean on goals and vision to come up with concepts that we can explore. We put ourselves in the users’ shoes. We lean on our process and our feedback loops. We know a little more.

Is the user better off because of your efforts? Address that question, over and over. The solutions may not even involve software delivery.

Reusable Classes

Today, my pair commented on how we have gotten a lot of use from our test-helper classes. How true, how true. Over the course of the last year, we have created the classes needed by our application, and the tests to support those classes. But we didn’t stop there. We also created many small utility classes to support our tests. These classes are barely worth writing, or so it would seem. They are actually priceless.

We have gotten a lot of re-use from these classes because they are small and concise. They each address and encapsulate one sole responsibility. The classes usually consist of two or three methods, and are no longer than about twenty lines. It’s in this precision that they find their utility. Being small and to the point, it is very easy to pick the right class when needed.

We’ve developed classes to set and reset system properties; to set and reset logging levels; to read and write private fields; to set IDs on our domain models; and many other small responsibilities. At first, when developing these helpers, we would wait for a duplicate need to appear, and then factor out a re-usable component. As we got better at this, we wouldn’t wait for a duplicate need, but instead would factor out responsibilities as we saw them appear. This helped us get even more utility from these classes. As soon as the classes were created, uses for them would appear.

By encapsulating concepts, keeping concepts small and concise, and identifying concepts early, you will increase your chances for re-use, and get a lot of mileage from your classes.
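The system-property helper, for example, fits comfortably in twenty lines. Here is a minimal sketch of what such a class might look like; the class and method names are illustrative, not the ones from our codebase:

```java
// Sets a system property, remembering the original value so the test
// can restore the environment when it is done.
public class SystemPropertyHelper {
    private final String key;
    private final String originalValue;

    public SystemPropertyHelper(String key, String newValue) {
        this.key = key;
        // Capture the original value before overwriting it.
        this.originalValue = System.getProperty(key);
        System.setProperty(key, newValue);
    }

    // Put the property back the way we found it.
    public void restore() {
        if (originalValue == null) {
            System.clearProperty(key);
        } else {
            System.setProperty(key, originalValue);
        }
    }
}
```

A test creates the helper in its setup, and calls restore() in its teardown; because the class does exactly one thing, it is obvious when to reach for it.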

My takeaways from this experience:

  • encapsulate responsibilities
  • create small classes
  • factor out responsibilities early

Hindsight Is 20/20 Even Several Years Later

I ran into an old colleague, Alex, on the train today. We went through the usual catch up conversations, and got into talking about some of the old projects we had worked on together.

Alex mentioned that an old application that I had worked on a long time ago was still being used, and the code still maintained. Our project was a banking web site, and this application was a supporting application used to verify and enroll users into the main application, which was a Java web application. In typical hindsight-is-20/20 fashion, we started to comment on how we might have approached that application differently.

To appreciate what we would have done differently, I guess I owe a little background on the application. The original enrollment application was a Lotus Notes application, and it verified the enrollee information against the account database. When lookups against the account database started having performance problems, we decided to “quickly” fix the problem by replacing the lookup with a PowerBuilder application. After some modest success improving performance, we decided to fully rewrite the enrollment application in PowerBuilder. The application was written alongside the main application, with little design, little visibility, and little testing. Despite that, it was deployed on time, and it worked.

Looking back, I would have made this a full-fledged and visible project. The enrollment application, by its nature, has a lifespan that matches that of the main application. This alone should have been consideration enough to give the enrollment application more visibility.

Alex mentioned that maintenance of the application was passed around from team to team. The enrollment application was written in PowerBuilder by a couple of team members, while the rest of the team was working on a Java web application. In hindsight, I would have implemented the enrollment application using the same technologies as the main application. That would have kept the required technical skill set in line with the main application, improving the chances of the application being maintained.

I mentioned increasing the visibility. To me this means two things: more project management visibility, and more testing. Over the years, I’ve seen that visibility into the details of a project, and into the true status of tasks, leads to more successful projects. In part, because developers tell it like it is, and in part because managers can see the details for themselves. The outcome of the original project may not have changed, since the application worked, but the process of getting there would have been much smoother. The same goes for more testing; the outcome may not have changed, but the project would have gone more smoothly.

Finally, when the PowerBuilder application was first developed, it was thought that this would be a short-lived application, to help with performance for a short time, and then the application would be retired. Of course, it is still not retired. In hindsight, I would have approached even the smallest effort with the same rigor as an application planned for a long lifespan. You never know when an application will take on a life of its own, and outlive everyone’s expectations.

Where Have I Been

I need to thank my friend Tom for getting me back to writing. He’s doing a craftsman spike to refresh his learning process. Talking to him got me back on track about documenting my own learning. I haven’t been documenting lately because I’ve been studying. I’m 3 classes away from getting my Master’s in Software Engineering at DePaul University.

I started looking into this a couple of years ago while in a rut. I wasn’t inspired by my work. The problems were interesting, but I wasn’t. Instead of solving problems, I spent a lot of energy trying to muster up the inspiration to get into the problem. Up to that time, all of my programming experience had been on the job, so I thought that some formal study would jump-start me. A year and a half later, it’s clear that was a good decision. I’ve learned things about how software is developed, as well as how software can be developed. Best of all, I again look forward to getting up and going to work; to solving problems; and to learning a little more each day.

To give myself the best chance to succeed, I will need to combine all of what I’ve learned on the job, everything I’ve read along the way, and what I’ve learned in my course work. There are some great software engineering practices that can help move your project forward, just as there are some great practices and trends from the field, along the lines of agile methods. The idea is to cross-pollinate some of these ideas, to mix and match practices and techniques, to upgrade our toolbox for development. I plan to share some of my ideas on this… soon.

Keeping It Simple

A teammate asked for help on a problem he was having. He was using Hibernate to retrieve data from a table, and wanted to retrieve a character column (Y/N) as a Boolean. His first thought was to use the @Type annotation. He also considered setting the hibernate.query.substitutions property.

When he asked me for help, both of these solutions seemed quite complicated for his problem. I asked about retrieving the data as a String, but he said he tried and that it did not work. He was stumped. I was skeptical. I was sure there was a simple way to get this to work.

He changed his model to represent the data as a string, reloaded the system, and showed me the error he was getting. When we followed the stacktrace back to the line that was causing an error, I noticed that the model being retrieved was not the one he was changing. He had two similarly-named models, and was changing one while testing the behavior of the other.
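For the record, here is a sketch of the simple approach, with hypothetical class and field names: let Hibernate map the Y/N column to a plain String field, and expose a boolean convenience getter on the model.

```java
// Hypothetical model: Hibernate maps the CHAR(1) column straight onto
// the activeFlag String field; no custom @Type is needed.
public class Account {
    private String activeFlag;

    public String getActiveFlag() {
        return activeFlag;
    }

    public void setActiveFlag(String activeFlag) {
        this.activeFlag = activeFlag;
    }

    // Derived boolean view of the flag; constant-first equals() makes
    // this null-safe if the column was never populated.
    public boolean isActive() {
        return "Y".equals(activeFlag);
    }
}
```

The rest of the code works with isActive(), and the persistence mapping stays dead simple.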

These questions came to mind:

  1. Why wasn’t he pairing while working on this task?
  2. Why were the models similarly named?
  3. Why did he need to reboot the system? Why wasn’t there a quick integration test written?

Respect Your Unit Tests

Our team just completed a major release of our software. It’s been three months since the last release. This is a very long time based on our usual pace of 2 to 3 weeks, so we could expect to have some issues after deployment. Had we shown some respect for our unit tests, we would have had fewer issues.

One of the problems we had was related to some external libraries we were using, which required our models to conform to the JavaBeans spec (having both a getX() and a setX(X x) method). In some cases, the getX() was the only meaningful method, and the setX(X x) was only put there as a placeholder to conform to the API. If the setX(X x) had a nice comment explaining its purpose in life, we were safe. But sometimes, we had the stub setX(X x) method without an explanation of why it was there. And upon doing some reference checks, we would quickly determine that it was not used by any code, and could be removed.
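The pattern looks something like this sketch (class and property names are hypothetical, not from our codebase): the setter exists only so the external library can discover the property.

```java
// Hypothetical model consumed by an external reporting library.
public class ReportModel {
    private String title;

    public String getTitle() {
        return title;
    }

    // Placeholder: never called by our own code, but required because the
    // external library discovers properties via the JavaBeans getX/setX
    // convention. Do not remove.
    public void setTitle(String title) {
        this.title = title;
    }
}
```

With the comment in place, a reference check that finds no callers no longer looks like proof that the method is dead code.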

That’s just what happened in this case. A developer noticed the dead code, asked around if it was used, got consensus that it wasn’t, and proceeded to remove it. He ran the unit tests, fixed a couple of broken tests, and checked in his changes.

He removed un-used code, and then had to fix some unit tests.

And when we released the code to production, we had problems.

The fact that the unit tests failed and needed fixing should have been the indication that the code was used, even if not explicitly referenced by any of our code. The fact that we had production problems confirmed it. The unit tests were in place to prevent this error. But the developer bypassed that warning.

The unfortunate fact is that some lessons are only learned from a mistake. But it’s these mistakes that we need to file away as experience, so that we get better at what we do. Take care in writing unit tests, so that they are meaningful and accurate. And then take greater care when modifying them. When in doubt, trust that the unit tests are right. Grab a pair; get a second opinion; get a third opinion. Let the unit tests serve as one of your safety nets when making changes to code. And respect your unit tests. They don’t lie.

Back to the Basics of Refactoring

I ran into a couple of separate bugs over the last couple of days that were caused by some refactoring. How can that be, you ask? I asked myself the same thing! By definition, refactoring should change the structure of the code without changing the behavior. Yet, in both cases, the behavior did change. Let’s see if there is a way to prevent this from happening, and keep with the true nature of refactoring.

Refactoring defined

Martin Fowler defines refactoring like this:

Refactoring is a disciplined technique for restructuring an existing body of code, altering its internal structure without changing its external behavior. Its heart is a series of small behavior preserving transformations.

The key phrase I see is “small behavior preserving transformations.” A refactoring is a small change, and it does not change the behavior.

Preserving Behavior with Unit Tests

The best way to ensure that you are preserving the behavior is to have a suite of fast and accurate unit tests. Yes, I am asking a lot here.

The unit tests need to be fast, so that developers do not feel bogged down by having to run them. When they are fast, developers can run all unit tests many times throughout their development cycles.

The unit tests need to be accurate so that they go so far as to specify the intended external behavior of a module (or class, or method, or whatever your “unit” is), yet they are flexible enough so that they do not specify the internal structure. This is part science and part art, and is well beyond the scope of this discussion. But the goal is to specify behavior, not implementation.

So why did these refactorings go south?

We all know the theories of modern development (everything is tested, pair programming, the simplest thing that works), yet it can be hard to put them into practice.

One of the refactorings I encountered was very subtle in how it broke down. The developer ran all unit tests, and they passed. Then he made the refactoring. Then he ran all tests again, and they passed again. So he was done, and went on to the next task. However, the tests passed because the code in question was not covered by tests.

So let me restate my thesis. The best way to ensure that you are preserving the behavior, is to have a suite of fast, accurate, and complete unit tests. There, that should do it.

When unit tests are not enough

The other refactoring was a bigger change. The goal was to put the soft into the software. We wanted to take some data that was hardcoded into the application, and move it to the database, so that changes became configurable. The problem was that we had three scenarios, but two of them shared the same hardcoded implementation. So when we went to a configurable implementation, we lost one of the scenarios.

This problem could also have been avoided by tests, though maybe not what is considered a “unit” test. To catch this problem as soon as it happened, a broader suite of tests would be needed: integration tests, functional tests. The key, though, is that these tests should be broad enough to cover the scope of the refactoring, fast enough to be run by the developer, and complete enough to have tested (and exposed) all scenarios.

One last time. The best way to ensure that you are preserving the behavior of your code while refactoring, is to have a fast, accurate, complete and broad suite of tests. Fast enough to run often, accurate enough to specify the behavior, complete enough to test what you are changing, and broad enough to cover your code base at different levels of granularity.

Conclusion

Refactoring improves the code base. Refactoring simplifies the code, makes it easier to comprehend, and easier to change. No arguments there. The key, however, is that refactoring preserves behavior, and the only way to ensure that behavior is preserved is to have a broad and accurate suite of tests around the code.

Stated another way, the end user should not be able to tell immediately that you have completed a refactoring. Instead, they should be somewhat impressed when you quickly deliver the next feature that they request.

A Quick Review of Debugging Struts Applications

I had worked on a Struts web application a couple of years ago, and within our team I am still considered the “expert” on that application. So yesterday, when something wasn’t working correctly, a teammate approached me and asked for help.

As I walked her through what was going on, I made it a point NOT to rely on any knowledge of the application (after all, it has been two years and several maintenance programmers since I last worked on the application). Here’s what we did:

  • Based on the URL, track down the action mapping
  • Look in the struts-config.xml file and find the JSP that is rendered
  • Examine the JSP and see where the data is coming from; identify the form object that holds the data
  • Back in struts-config.xml, find the action that does the work
  • In the action, look at how the form is populated
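To make the walkthrough concrete, here is a hypothetical struts-config.xml fragment (the path, class, and form names are invented for illustration) showing what the lookup steps land on: the action mapping ties the URL path to an Action class and a form bean, and the forward names the JSP that renders the result.

```xml
<!-- Hypothetical Struts 1 mapping: /viewAccount.do resolves here. -->
<action-mappings>
  <action path="/viewAccount"
          type="com.example.app.ViewAccountAction"
          name="accountForm"
          scope="request">
    <!-- The JSP that renders the data held by accountForm. -->
    <forward name="success" path="/pages/viewAccount.jsp"/>
  </action>
</action-mappings>
```

Every step in the list above is just reading this file and the two artifacts it names, which is why no tacit knowledge of the application is needed.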

In finding and resolving the problem, I didn’t have to use any tacit knowledge of the application. Instead, we just ran the application, identified the problem, and tracked it back to the code that was causing it.

A Quick and Easy Way to Avoid NullPointerExceptions

In Java, we often see code that compares a particular value to some known constant. Often this is written like this:

someObject.getSomeValue().equals("SomeConstant");

This works ok, assuming that someObject is not null, and as long as you are sure that getSomeValue() will never be null.

If you aren’t so sure, or if you just want to develop good habits that will minimize the number of NullPointerExceptions you run into, you may try writing the same comparison this way:

"SomeConstant".equals(someObject.getSomeValue());

You are ensuring that you will not run into the dreaded java.lang.NullPointerException, because your constant value will never be null. And you are improving your own productivity, because you and your teammates will spend less time tracking down and fixing NullPointerExceptions.
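A runnable sketch makes the difference plain (SomeObject here is a stand-in class, not from any real codebase):

```java
public class NullSafeComparison {
    // Stand-in for a domain object whose value was never populated.
    static class SomeObject {
        String getSomeValue() {
            return null;
        }
    }

    public static void main(String[] args) {
        SomeObject someObject = new SomeObject();

        // Constant-first: safe, simply evaluates to false.
        boolean matches = "SomeConstant".equals(someObject.getSomeValue());
        System.out.println(matches); // prints "false"

        // Value-first would throw instead:
        // someObject.getSomeValue().equals("SomeConstant")
        // blows up with a NullPointerException when getSomeValue() is null.
    }
}
```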

Glimmer: Ruby on the Desktop with Eclipse SWT

When I hear about Ruby, the first thought that comes to mind is Ruby on Rails and Web 2.0 applications. I would have never made the association from Ruby to desktop applications. Until now. About a year ago, it was suggested that JRuby and SWT might be a viable combination for Ruby on the desktop. After all, SWT is the performant, native desktop library available from Eclipse, and Ruby gives you many productivity advantages. There was even a SWeeTgui project at the time, though it doesn’t seem like there was much traction.

Fast-forward one year, and we now have Glimmer: “a JRuby DSL that enables easy and efficient authoring of user-interfaces”. What advantages are there with Glimmer? Here’s what I see:

  • A compact API that allows Ruby developers to write native desktop applications
  • A clean wrapper around the SWT libraries, that takes a minimalist approach by exposing the most important features and applying smart defaults everywhere
  • An API that is based on Ruby’s programming paradigms, not Java’s.
  • The ability to implement complex SWT desktop applications with only 25% of the code.

That last point is what brought me over: being able to write the same functionality with just a quarter of the code (and time). I’ve been developing Java applications for nearly a decade now, and using SWT for two years, and I feel very comfortable with both.

When I first saw Glimmer, I didn’t believe that I needed it for my Java SWT applications, because I know Java, and I know SWT. But as we discussed the merits of this API, and I saw a demonstration of some complicated user interfaces, I got a “glimmer” in my eye. I could see a lot of productivity benefits here.

Take a look for yourself, and consider it for your next desktop project.