
Methods, Not Methodology (3): Knowing Everything at the Beginning?

September 23, 2012

Knowing every detail at the beginning is considered impossible, or at least unnecessary, which is why BDUF (Big Design Up Front) is considered harmful.

That's usually true, but in some cases we do need to know as many details as possible up front. Examples include fixing bugs without automated regression tests, or refactoring a legacy code base. The assumption behind "BDUF is harmful" is uncertainty and the potential waste it causes. But bug fixing and refactoring are more certain activities: they work on a code base that is already there, toward a target that is relatively clear. For the following reasons, we'd better know more before we start:

  • There is no feedback or protection from automated tests, so we are likely to break something.
  • We need to understand the risk, so we can have a targeted test plan.
  • We need to understand the effort, so we can have a reasonable schedule.

So the question is: when it is necessary, how can we know everything at the beginning?

I don't think there is a universal answer to this question. I will just focus on some heuristic methods for specific scenarios, such as bug fixing and refactoring.

The core idea here is impact analysis. There are at least two methods we can leverage.

Impact Analysis through Code Reference

This was explained by Michael Feathers in his book Working Effectively with Legacy Code. The basic idea: if we know we will change a specific code snippet, such as a method, we can search for the callers of that method to see how they fill in the parameters and use the return value. It's a recursive process. Once you have reached the top-level code (such as user-interaction handlers) for each call stack, you have the impact scope.

Nowadays intelligent IDEs like IntelliJ IDEA and ReSharper can show the impact scope at the press of a keyboard shortcut.
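Outside an IDE, the recursive caller search described above can be sketched as a walk over a caller graph. A minimal sketch in Python, assuming we have already extracted a "who calls whom" map from the code base (all function names here are hypothetical):

```python
# Caller-based impact analysis: starting from the method we plan to change,
# collect every method that transitively calls it. The traversal stops when
# it reaches top-level entries (e.g. UI handlers) that have no callers.
def impact_scope(changed, callers):
    """Return the set of methods transitively calling `changed`."""
    impacted, stack = set(), [changed]
    while stack:
        current = stack.pop()
        for caller in callers.get(current, ()):
            if caller not in impacted:
                impacted.add(caller)
                stack.append(caller)
    return impacted

# Hypothetical callee -> callers map extracted from a code base.
callers = {
    "parse_date": {"format_report", "validate_input"},
    "format_report": {"export_button_clicked"},
    "validate_input": {"save_button_clicked"},
}

print(sorted(impact_scope("parse_date", callers)))
# → ['export_button_clicked', 'format_report', 'save_button_clicked', 'validate_input']
```

Changing `parse_date` therefore impacts two top-level interactions (the export and save buttons), which is exactly the scope a targeted test plan should cover.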

Impact Analysis through an OFM (Object-Feature Mapping) Diagram

This is the new method I want to introduce. In many projects I've seen domain object diagrams that help explain the design of the software, and feature diagrams that help explain the big picture of the product. Feature diagrams are also helpful for regression testing. But there is rarely a diagram showing the relations/mappings between domain objects and features, which would be helpful for impact analysis.

In a good design, which is loosely coupled and highly cohesive, this kind of mapping is unnecessary because the impact is always localized. But in most legacy code, a single domain object can be coupled with hundreds of features.

Of course, a legacy system may lack design to the point that there is no "domain object" at all. But every system processes some data, so listing a data-feature mapping is more practical. Like the table below:

| Data \ Feature     | Authentication and Authorization | Project Assignment | Recruiting Demand Analysis | Relocation Arrangement |
|--------------------|----------------------------------|--------------------|----------------------------|------------------------|
| User Credentials   | X                                |                    |                            |                        |
| Employee Skills    |                                  | X                  | X                          |                        |
| Employee Locations |                                  | X                  |                            | X                      |
| Employee Schedules |                                  | X                  |                            |                        |
| Project Demand     |                                  | X                  | X                          |                        |

When we need to introduce changes to "Employee Locations", we know it will impact at least two features: "Project Assignment" and "Relocation Arrangement".

The table could be huge for a large system. Again, the purpose is just to remind the team not to forget something, so it doesn't need to be comprehensive. Highlight the most important parts.
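The mapping above doesn't have to live in a diagram; it can be kept as plain data and queried directly. A minimal sketch, where the entries mirror the illustrative table (the mapping itself is an assumption for the example, not real project data):

```python
# Data-feature mapping kept as a simple dictionary. In a real project this
# would be maintained by the team, e.g. on a wiki page exported to a file.
DATA_FEATURE_MAP = {
    "User Credentials": {"Authentication and Authorization"},
    "Employee Skills": {"Project Assignment", "Recruiting Demand Analysis"},
    "Employee Locations": {"Project Assignment", "Relocation Arrangement"},
    "Employee Schedules": {"Project Assignment"},
    "Project Demand": {"Project Assignment", "Recruiting Demand Analysis"},
}

def impacted_features(changed_data):
    """Union of all features mapped to the changed data items."""
    return set().union(*(DATA_FEATURE_MAP.get(d, set()) for d in changed_data))

print(sorted(impacted_features({"Employee Locations"})))
# → ['Project Assignment', 'Relocation Arrangement']
```

A query like this gives the test team a first-cut regression scope before any code is touched.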

Recognize Cross-cutting Changes

Things like internationalization/localization, look & feel, logging, licensing, transactions, data migration, audit trail, etc., are cross-cutting: they impact all features. List them on a wiki page or a sheet of A4 paper, and refer to them during impact analysis.

Other posts in the Methods, Not Methodology series:
