Lessons learned about upgrading from NUnit 2.x to 3.x

Our current project has been spurred to move from NUnit version 2.6.4 to version 3.6 for .NET Core compatibility.  Along the way we learned about some key differences, some minor, some more interesting:

Ignore attributes

With NUnit 3.x, [Ignore] attributes must specify a reason. So, everywhere we had this before a test method:

[Ignore]

we adjusted it to:

[Ignore("Reason")]

ExpectedException attributes

We had been using the ExpectedException attribute to evaluate proper error handling.  Here is a typical example:

        [Test]
        [ExpectedException(typeof(OurCustomException), ExpectedMessage = "Some Exception Text")]
        public void Method_ThrowsExpectedException()
        {
            .
            .
            .
            testedObject.TestedMethod();
        }

The ExpectedException attribute is no longer supported.  Instead, there are a couple of new ways to evaluate Exceptions: 

  • var ex = Assert.Throws<ExceptionType>(() => testedObject.TestedMethod())
  • Assert.That(() => testedObject.TestedMethod(), Throws.TypeOf<ExceptionType>())

In our case, we wanted to test not only the exception type but also the error message.  So, we implemented the first option.  Here is an example:

        [Test]
        public void Method_ThrowsExpectedException()
        {
            .
            .
            .
            var ex = Assert.Throws<OurCustomException>(() => testedObject.TestedMethod());
            Assert.That(ex.Message, Is.EqualTo("Some Exception Text"));
        }
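
For reference, the second option can make the same check, message included, in a single constraint.  This is standard NUnit 3 constraint syntax, shown with the same hypothetical names as the example above:

        [Test]
        public void Method_ThrowsExpectedException_ConstraintStyle()
        {
            Assert.That(() => testedObject.TestedMethod(),
                Throws.TypeOf<OurCustomException>().With.Message.EqualTo("Some Exception Text"));
        }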

NUnit no longer sets the working directory to the \bin folder

At times, our tests count on finding certain files within the project folder structure.  For example, some logging features normally write to a standard logging folder, but in a test context we point them to write to a folder in the project structure.  This behavior depended on the test runner changing the working directory to the \bin folder, which NUnit 2.x used to do.  NUnit 3 doesn't do that anymore.  So, we need to change the working directory as a setup step in order for the existing tests to continue to function.  Since the affected test projects were SpecFlow-based, we added this code in one of the step files:

        [BeforeTestRun]
        public static void TestRunSetup()
        {
            // Restore the NUnit 2.x behavior: make the working directory the
            // folder containing the test assembly (the \bin output folder).
            var dir = Path.GetDirectoryName(typeof(CurrentClassName).Assembly.Location);
            Environment.CurrentDirectory = dir;
        }

Developing and Deploying a .NET Core Lambda to AWS

Yesterday's exploration brought us to the point where we had a working .NET Core feature integrated with unit and functional tests.  Today, we will focus on wrapping that feature up as an AWS Lambda and automatically deploying it.

Our Lambda project (QueueMover.Lambda) was created using the AWS Lambda Project template provided as part of the AWS Toolkit for Visual Studio 2015 (installed via Extensions and Updates).

The fundamental architectural concept is to put as much code as possible in the core project, where it is protected by tests; then, create an extremely thin Lambda wrapper around that code.  The wrapper should only be responsible for receiving any event and context parameters, taking care of dependency injection, and calling the core feature.  Here is the code:

        public async Task MoveHandler(ILambdaContext context)
        {
            var configSettings = new LambdaConfigSettings(context);

            var amazonSqs = new AmazonSQSClient(configSettings.SqsAccessKeyId, configSettings.SqsSecretAccessKey,
                configSettings.SqsRegion);

            var queueMover = new QueueMover(configSettings, amazonSqs);
            await queueMover.Move();
        }

Note the LambdaConfigSettings class.  It is based on the interface IConfigSettings.  This allows the unit tests to mock the configuration settings, and the functional tests to use a standard app.config file wrapped in a FileConfigSettings object based on the same IConfigSettings interface.  Now, when we integrate the QueueMover.Move feature into a Lambda, we can provide the same configuration settings using the environment settings supplied through the context parameter.
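
To make this concrete, here is a rough sketch of what IConfigSettings might look like.  The post doesn't show the actual interface, so its exact shape is an assumption; the property names come from the handler code above and the deployment settings below:

    // Hypothetical sketch of IConfigSettings; the real interface is not shown
    // in this post.  Property names are taken from the handler code and the
    // environment variables in the deployment settings.
    public interface IConfigSettings
    {
        string SourceQueueName { get; }
        string TargetQueueName { get; }
        string SqsAccessKeyId { get; }
        string SqsSecretAccessKey { get; }
        Amazon.RegionEndpoint SqsRegion { get; }
    }

LambdaConfigSettings, FileConfigSettings, and the unit-test mocks would each implement this one interface.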

We use those settings to set up an Amazon SQS client and pass these along to the core feature.  

That's it.  That's all the code we have outside of the core project, unprotected by tests.  Having very little code in the Lambda wrapper is a very good thing, since debugging a deployed Lambda is hard.  

Deployment

Now that the Lambda is complete, doing a manual deployment is straightforward.  Deployments can also be accomplished from the command line.  In either case, it is important to understand the role of the aws-lambda-tools-defaults.json file that comes with the Lambda project template.  

{
    "profile"     : "MyProfile",
    "region"      : "us-west-2",
    "configuration" : "Release",
    "framework"     : "netcoreapp1.0",
    "function-runtime" : "dotnetcore1.0",
    "function-memory-size" : 256,
    "function-timeout"     : 30,
    "function-handler"     : "QueueMover.Lambda::QueueMover.Lambda.LambdaHandlers::MoveHandler",
    "function-name"        : "TestQueueMover",
    "function-role"        : "arn:aws:iam::458256242382:role/service-role/execute_my_lambda",
    "environment-variables" : "\"SourceQueueName\"=\"QueueMoverSource\";\"TargetQueueName\"=\"QueueMoverTarget\";\"SqsAccessKeyId\"=\"NotGonnaTellYou\";\"SqsSecretAccessKey\"=\"NopeNotGonnaDoIt\";\"SqsRegion\"=\"us-west-1\""
}

This file has a role for both manual and automated deployments.  When you use the interactive Publish to AWS Lambda... feature, the fields are pre-populated with the values from this file.  When you use the command line interface, it will use these values unless you override them with command line parameters.  (To see your parameter options, type dotnet lambda help deploy-function in your Lambda project directory.)

Here are some notable settings from the defaults file:

 "function-handler"     : "QueueMover.Lambda::QueueMover.Lambda.LambdaHandlers::MoveHandler",

This is how you tell AWS Lambda which method to invoke.  It is made up of four parts:

  • Assembly name (i.e. project name)
  • Namespace (may be the same as the assembly name if your project isn't complex enough for a namespace hierarchy)
  • Class name
  • Method name
"function-name"        : "TestQueueMover",

This is the name of the Lambda function that you will see in AWS.

"function-role"        : "arn:aws:iam::458256242382:role/service-role/execute_my_lambda"

This is an existing AWS IAM role that the Lambda function will execute under.

"environment-variables" : "\"SourceQueueName\"=\"QueueMoverSource\";\"TargetQueueName\"=\"QueueMoverTarget\";\"SqsAccessKeyId\"=\"NotGonnaTellYou\";\"SqsSecretAccessKey\"=\"NopeNotGonnaDoIt\";\"SqsRegion\"=\"us-west-1\"

Use these environment variables in place of your typical .config file.
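
As a minimal sketch of how that can work (this uses the standard System.Environment API; whether LambdaConfigSettings reads its values exactly this way is an assumption):

    using System;

    // Hypothetical helper: read a deployment-time environment variable in
    // place of an app.config setting.  The names here are illustrative.
    public static class EnvironmentSettings
    {
        public static string Get(string name) =>
            Environment.GetEnvironmentVariable(name)
            ?? throw new InvalidOperationException($"Missing environment variable: {name}");
    }

    // Usage: var sourceQueueName = EnvironmentSettings.Get("SourceQueueName");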

By setting this file up with the standard values you will use, your automated deployment scripts/tools can simply override the settings that are variable.

For example, I created this PowerShell script:

dotnet lambda deploy-function -fn "DLQ_Reprocess_ReleaseDelta" -fh "QueueMover.Lambda::QueueMover.Lambda.LambdaHandlers::MoveHandler" -ev '"SourceQueueName"="ReleaseDelta_Dev_DLQ";"TargetQueueName"="ReleaseDelta_Dev";"SqsAccessKeyId"="NotGonnaTellYou";"SqsSecretAccessKey"="NopeNotGonnaDoIt";"SqsRegion"="us-west-2"'
dotnet lambda deploy-function -fn "DLQ_Reprocess_ReleaseStatusChange" -fh "QueueMover.Lambda::QueueMover.Lambda.LambdaHandlers::MoveHandler" -ev '"SourceQueueName"="ReleaseStatusChange_Dev_DLQ";"TargetQueueName"="ReleaseStatusChange_Dev";"SqsAccessKeyId"="NotGonnaTellYou";"SqsSecretAccessKey"="NopeNotGonnaDoIt";"SqsRegion"="us-west-2"'

It deploys the same QueueMover.Move function twice.  Each instance has a different name and different environment settings for the appropriate queues.  Running this script automatically sets up two Lambdas.

Of course, we need a way to change these settings to be appropriate for each deployment environment.  For this, we could make the PowerShell script accept parameters.  And/or, we can use our deployment tool, Octopus Deploy, to help.  That is for a future discussion with our deployment folks.

Integrating a .NET Core project in a TDD/BDD environment

We have been exploring the use of Amazon AWS Lambdas on our project.  Since we have a large investment in our C# codebase, we were quite interested when Amazon announced C# support for developing Lambdas, albeit only with .NET Core support.

As a pilot, we have chosen a very simple feature: moving messages from one queue to another.  It's not just a throwaway effort - we want a means for moving items from a Dead Letter Queue back into the main queue for reprocessing.  But, this effort will mainly let us explore what is involved in developing a .NET Core Lambda as part of a production project.  We want it integrated into a project with TDD and BDD tests, with automated deployments.

My first day focused on the .NET Core aspects of the project, not touching on Lambdas yet.

Incorporating the .NET Core project requires a good understanding of target frameworks.  And, that requires grasping the role of the .NET Standard framework.  

The .NET Standard framework is not an actual library of code, whereas the classic .NET Framework and .NET Core are.  You could say that .NET Standard is to .NET Framework and .NET Core as an interface or abstract class is to a concrete implementation.  The .NET Standard versions define a set of supported libraries that other frameworks implement.  When your project targets a .NET Standard version, it corresponds to specific versions of the related frameworks.

Here is a helpful metaphor from David Fowler, likening the relationships between target frameworks to interfaces with inheritance relationships.
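
Rendered as (deliberately invented) C#, the metaphor goes something like this:

    // Illustrative only: each .NET Standard version acts like an interface
    // extending the previous one, and each concrete framework implements
    // some version of it.  These type names are invented for the analogy.
    interface INetStandard15 { /* the API set defined by netstandard1.5 */ }
    interface INetStandard16 : INetStandard15 { /* netstandard1.6 adds more APIs */ }

    // .NET Core 1.0 implements netstandard1.6, so it can consume any library
    // that targets netstandard1.6 or lower.
    class NetCoreApp10 : INetStandard16 { }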

Here is the authoritative table showing the relationship between .NET Standard versions and related frameworks.

A couple of effective ways of thinking about .NET Standard framework versions:

  • The higher the version, the more APIs are available to you.
  • The lower the version, the more platforms implement it.

Another way of expressing it:

  • App Developers: You target the platform TFM you're writing for (netcoreapp1.0, uap10.0, net452, xamarinios, etc.).
  • Package/Library Authors: Target the lowest netstandard version you can.  You will run on all platforms that support that netstandard version or higher.

So, given that guidance, I settled on this strategy:

  • Target .NET Standard 1.6 for the project with the core functionality (QueueMover).
  • Target .NET Core 1.0 for a project to provide a thin wrapper for implementing a Lambda (QueueMover.Lambda).
  • Target whatever framework is appropriate for the test projects (QueueMover.Tests.Unit and QueueMover.Tests.Functional) as dictated by the tools (and there are constraints, as we will see).

So, let's look at the project.json files for the arrangement I had working at the end of the day:

QueueMover (Core functionality)

{
  "version": "1.0.0-*",

  "dependencies": {
    "AWSSDK.SQS": "3.3.1.6",
    "NETStandard.Library": "1.6.0"
  },

  "frameworks": {
    "netstandard1.6": {
      "imports": "dnxcore50"
    },
    "netcoreapp1.0": {},
    "net46": {}
  }
}

A few interesting things to note here:

"dependencies": {
    "AWSSDK.SQS": "3.3.1.6",
    "NETStandard.Library": "1.6.0"
  },

Targeting .NET Standard 1.6 results in a library compatible with .NET Core 1.0 and .NET Framework 4.6.  The only other dependency for this simple project is the AWS SQS SDK.

 "frameworks": {
    "netstandard1.6": {
      "imports": "dnxcore50"
    },

Again, targeting .NET Standard 1.6.  The "imports": "dnxcore50" enables integration with some NuGet packages that have not been upgraded to use current Target Framework Monikers (TFMs).  See https://github.com/aspnet/Home/issues/1540

    "netcoreapp1.0": {},
    "net46": {}

These are crucial to having this .NET Standard 1.6 project cooperate with other projects in the solution.  Each of the "frameworks" entries causes the project to generate DLLs for that target framework (look in the folders under \bin\Debug).  The "netcoreapp1.0" entry was essential for integration with the QueueMover.Lambda and QueueMover.Tests.Unit projects; "net46" was necessary for integration with the QueueMover.Tests.Functional project.

QueueMover.Tests.Unit (Unit Tests)

{
  "version": "1.0.0-*",

  "dependencies": {
    "Microsoft.NETCore.App": {
      "type": "platform",
      "version": "1.0.0"
    },
    "AWSSDK.SQS": "3.3.1.6",
    "Moq": "4.6.38-alpha",
    "NUnit": "3.6.0",
    "dotnet-test-nunit": "3.4.0-beta-3",
    "QueueMover": "1.0.0-*"
  },

  "testRunner": "nunit",

  "frameworks": {
    "netcoreapp1.0": {
      "imports": "dnxcore50"
    }
  }
}

Several things to comment on here:

"dependencies": {
    "Microsoft.NETCore.App": {
      "type": "platform",
      "version": "1.0.0"
    },

Due to some TDD-related tools (discussed below), this project needs to target .NET Core 1.0.  Using "type" : "platform" indicates that we are counting on that dependency existing in the target environment, as this helpful quote explains:

The type "platform" property on that dependency means that at publish time, the tooling will skip publishing the assemblies for that dependency to the published output.

So, this means that .NET Core needs to be installed on the developer's machine and in the Continuous Integration environment.  That seems like a reasonable expectation, and it will dramatically reduce the size of the deployed package.

"AWSSDK.SQS": "3.3.1.6",

Needs the AWS SQS SDK to set up mocks for AWS services.

    "Moq": "4.6.38-alpha",

.NET Core support in our standard mocking framework, Moq, is a work in progress.  Version 4.6.38-alpha was sufficiently stable for my needs.

    "NUnit": "3.6.0",
    "dotnet-test-nunit": "3.4.0-beta-3",
    .
    .
    .
  "testRunner": "nunit",

NUnit support for .NET Core seems pretty good right now.  In addition to the standard NUnit NuGet package, you need the "dotnet-test-nunit" NuGet package to install a compatible test runner.

    "QueueMover": "1.0.0-*"

A reference to the project we are testing.

  "frameworks": {
    "netcoreapp1.0": {
      "imports": "dnxcore50"
    }

Again, targeting .NET Core 1.0 only, for Moq and NUnit compatibility.  Nothing will depend on the unit test project, so no need to target any other frameworks.

This results in a workable unit test project.  'Workable' but not what we have become accustomed to on our project.  Two key tools have been left out:  

  • NCrunch.  Support is not there yet.  The developer says 'Maybe next month.'
  • ReSharper.  ReSharper 2016.3 has support for .NET Core.  I didn't take the time to upgrade yet.  For now, I have been content with the Visual Studio test runner.

(Note: I'm not sure if there is a crucial advantage to having the unit test project based on .NET Core.  Alternatively, it could be set up as a traditional .NET Framework class library (see below re: QueueMover.Tests.Functional).  To be explored later...)

QueueMover.Tests.Functional (BDD Tests)

Created as a standard Class Library.  SpecFlow support for .NET Core will be available RSN (Real Soon Now).  Until then, I chose to set up a traditional SpecFlow project.  For this, I set the project properties to target .NET Framework 4.6, since that is compatible with .NET Standard 1.6.  

Everything proceeded as usual for a SpecFlow project, with one key exception: even though I have the QueueMover project producing .NET Framework 4.6 DLLs, I could not set up a project reference to it.  As a work-around, I succeeded in setting up a direct reference to the QueueMover.dll in the net46 folder of the QueueMover project's build output.  With this in place, I can reference interfaces and classes in QueueMover as I implement tests, but debugging into QueueMover is not possible in the context of a SpecFlow test.  Not acceptable.

While the various alpha/beta versions and minor incompatibilities are yellow flags, this issue is the key red flag in my mind.  I am hopeful it can be resolved.

But, for now, on to Lambdas!

Extreme Remote: Code Ownership - The advantages of a TDD mindset

We have proposed a 'relay race' style of development where in-house and remote team members hand off active development on code modules as their work days start and end with the rotation of the earth.  This manner of work necessitates very granular requirements so that individual requirements can be completed within a day.  

Is this, however, just a nice theoretical way of working?  Isn't it common to get into a task and go down a rabbit hole when you discover a deep issue or a refactoring opportunity?  What you thought you could finish in the last two hours of the day turns into something much bigger.  So, then you end up not committing your changes and your counterpart on the other side of the globe doesn't have an accurate picture of your progress before they begin to work.  Duplicate and conflicting work may result.  

Yep.  That's going to happen fairly often.  But, having a TDD mindset can help with this scenario.  

The mantra "red-green-refactor" tells us to do the simplest thing possible to go from a failing test to a passing test, then think about how to make the code more beautiful and robust via refactoring.  With this approach, we can make the simple change that causes the test to pass and immediately commit the change.  Then, if we get deeply involved in some refactoring or deep investigation, we have established a clear indication of our progress.  Even our refactoring can be done incrementally.  At the end of the day, we can commit what we have accomplished so far and leave notes for what is left to do tomorrow. 

When our counterpart starts his/her work day, they will see which requirements have been completed and can move on to the next one.  Good notes in the code may also help indicate what issues you are planning to address the next day to prevent conflict.

This won't solve every issue that causes you not to finish a task within a day, but it is a very helpful manner of working in this sort of project.

Extreme Remote: Code Ownership - Daily reviews

The current series of posts has been recommending short-lived feature branches to help in-house and remote developers work on the same code modules.  Feature toggles and automated tests provide important protections while working in this manner.

Code reviews provide another key protection when code is being merged to the master branch regularly.  (While the relative merits of automated tests versus code reviews have been debated vigorously, the simple truth is that the best results are obtained when both are implemented effectively.)  

At Amplified Development, we recommend that all team members review all code merged to master in the previous 24 hours.  When these reviews are performed at the beginning of the work day, it will usually take about a half hour each day (for a small team of 4-6 developers).

Feature toggles, automated tests, and code reviews work together very effectively.  Feature toggles ensure new code never executes in a given environment until you want it to.  Automated tests ensure there is no regression in system functionality.  And, code reviews counteract the build-up of technical debt.

All of this allows teams to work in a tightly integrated manner, achieving genuine code ownership at all times. 

Extreme Remote: Code Ownership - Automated tests

When you work with short-lived feature branches, feature toggles help you sleep at night by quarantining new code for partially implemented features.  Yet, there may still be a worry that somehow there was some lapse in feature toggle implementation and side-effects have crept in.

Automated tests add one more layer of security.  Automated tests cover both unit tests (associated with TDD) and integration/acceptance tests (associated with BDD).  When you are confident that your code has good unit test coverage and integration test coverage, you have sound reason to believe that feature toggles are correctly quarantining new code.  

Automated tests should be run by developers before merging code to the master branch.  In addition, a solid continuous integration process will run the full suite of automated tests and reject the merge if any tests do not pass.  (Note: You should run automated tests with various configurations of feature toggles to simulate the settings for each environment - test, staging, and production.)

Extreme Remote: Code Ownership - Feature toggles

In the previous post, I recommended short-lived feature branches to help keep in-house and remote developers working effectively on the same code modules.  This ensures that your in-house developers maintain genuine code ownership at all times.

But, short-lived feature branches cause incomplete features to be built and deployed to testing, staging, and possibly even production environments.  How can that be a good thing?  It isn't, if the code for the incomplete features is active in those environments.  What we need is a way to quarantine the incomplete feature code from the active code.

Enter feature toggles.

The feature toggle concept is very simple: Have a boolean value that indicates whether a feature should be enabled in the current environment.  Then, surround new code with a simple IF statement checking the feature toggle. In the simplest implementation, this boolean value can be a configuration setting.  There are also more sophisticated implementations where you can change the setting at run-time.
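
Here is a minimal sketch of the configuration-setting flavor; every class, method, and setting name in it is hypothetical:

    using System;
    using System.Configuration; // toggle values live in app.config/web.config

    // Minimal feature toggle sketch; names are hypothetical.
    public static class FeatureToggles
    {
        // Simplest implementation: the boolean is a configuration setting.
        public static bool IsEnabled(string featureName) =>
            bool.TryParse(ConfigurationManager.AppSettings[featureName], out var enabled) && enabled;
    }

    public class WidgetService
    {
        public void CreateWidget(string name)
        {
            // Surround the new code with a simple IF checking the toggle.
            if (FeatureToggles.IsEnabled("NewWidgetValidation"))
            {
                // Incomplete feature code: quarantined until the toggle is on.
                ApplyNewValidationRules(name);
            }

            // Existing, active behavior continues to run as before.
            Console.WriteLine($"Widget '{name}' created.");
        }

        private static void ApplyNewValidationRules(string name) { /* new feature under development */ }
    }

With this in place, turning the feature on in a given environment is just a matter of flipping that environment's configuration setting.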

The developer will work with the feature toggle on, so that he/she can see the effects of their work.  The other environments (e.g. test, staging, and production) keep the feature toggle off until the feature is ready.  Once the feature is considered complete by the developers, then the feature toggle can be turned on in the test and/or staging environments, allowing for a testing period.  When it checks out thoroughly, the feature toggle can be turned on in production.

While the concept is simple, using feature toggles requires very thoughtful implementation.  I hope to describe some key considerations in a later post.  But, the key point here is that feature toggles help quarantine incomplete features while incremental implementation is merged into the master branch of the repository.

Extreme Remote: Code Ownership - Short-lived feature branches

We have been discussing the advantages of a 'relay race' style of development.  This approach has in-house and remote developers completing granular requirements and tasks each day, and their counterpart picking up progress as their work day begins.

Success with this approach requires articulating a good strategy for branching in the code repository.  There are two main strategies:

Long-lived feature branches - A feature branch is created when work commences on a significant feature.  Team members working on that feature contribute their changes to the feature branch only.  When the feature is complete, it goes through a process of testing and, when approved, the feature branch is merged to the master branch.  Sometimes, the merge process is challenging since conflicting changes may have been introduced over time from multiple feature branches.  To reduce this risk, many teams will merge changes from master regularly so that any merge issues are resolved in advance.

Short-lived feature branches - With this strategy, a feature branch is created when work on some small segment of work is initiated.  When each chunk of work is completed, it is merged to master - this may happen multiple times a day.  Merging from master regularly is still a good way to avoid merge surprises.   

(Some may observe that there is a third way: commit directly to master.  Our view is that short-lived feature branches serve essentially the same purpose.  Having a feature branch, however, gives a nice way to collect a coherent set of changes to be reviewed.  More on that later...)

At Amplified Development, we strongly recommend short-lived feature branches.  Applying changes to the master branch is an essential part of a continuous integration strategy.  And, it promotes the sort of team integration that Extreme Remote teams need.  

Applying changes to the master branch before a feature is complete, however, raises some challenges.  You don't want incomplete, unreviewed features appearing in a staging/test environment, much less production.  There are two key strategies to address the challenge:  feature toggles and daily code reviews.  Let's discuss those next...

Delightful Requirements: Scenario Names - Read well when collapsed

When I first began writing Gherkin scenarios, I tended to use scenario names as labels.  For example, I almost always started with:

Scenario: Happy Path

The subsequent scenarios would also often get a tag-like name.  For example:

Scenario: Required fields

Or

Scenario: Daylight savings time change

But, I have come to view scenario names as playing an important role in communicating to business stakeholders.  They provide an opportunity to articulate a behavior concisely and in expressive business-oriented language.

In fact, I love to see scenario names written in such a way that, when the steps are collapsed out of view, the scenario names read well as a description of how the feature works.  The collapsed view of the feature file becomes a valuable tool in communicating to certain audiences.  

For example, on a recent project, we prepared scenarios for a new feature and began to review them with a key business stakeholder.  It quickly became clear that he was finding the detail in the steps tedious.  So, we collapsed all of the scenarios down to their titles.  He was immediately more engaged, confirming scenarios and asking meaningful questions that gave rise to new scenarios.  Of course, we still collaborated with other business stakeholders on the details in the steps.  But, it was extremely helpful to have an intermediate level of detail that encouraged good communication.

Here is a simplified example.  Do you find this list of collapsed scenario names more helpful?

Happy Path
Required Fields
Valid id format

Or, would you rather discuss these scenario names with a business stakeholder?

Creating a widget adds a new widget to the system
Prevent creating a widget unless the widget name and category are provided
Prevent creating a widget with an id that is too long

I believe scenario names deserve careful attention for the purpose of readability.  Future posts in the Delightful Requirements series will offer suggestions for expressive, readable scenario names.

Delightful Requirements: Scenario Names - Write for the business stakeholder

In an earlier post, I promoted the idea that Gherkin scenarios should find a sweet spot between the interests of business stakeholders and test implementers.  In this post, I want you to forget all that: ignore the test implementer and focus only on the business stakeholder... at least when writing scenario names.

Scenario names have no implementation code associated with them.  So, they provide a unique opportunity to be completely unencumbered by test implementation considerations.  Flex your language skills!  Find the most meaningful way to express the requirement this scenario represents.

On one project, the requirements writers were spending a great deal of time studying legacy code to understand the requirements for a system rewrite.  After hours immersed in the code, we found them writing scenario names similar to this:

Scenario: Prevent assigning someone to a room that is already assigned to someone else on the new assignment's start date, where both the new and existing assignment start on the same day and the new assignment has no end date

Whew!  It showed meticulous understanding of the legacy logic.  But, what is it actually telling a business stakeholder?

Extreme Remote: Code Ownership - Granular requirements and tasks

In the last post, we described a kind of 'relay race' development where in-house and remote developers work on the same code modules, but hand off active development as work days start and end with the turning of the globe. In order to achieve this type of development, requirements and tasks must be quite granular - granular enough to be completed within one work day.  

Gherkin-style requirements are ideal for this type of development since each scenario is typically granular enough to be completed in less than a day. Each scenario represents a very specific requirement.  A developer can usually satisfy several requirements each day. In order to 'pass the baton', his/her counterpart on the other side of the globe can simply run the automated tests to see precisely where progress was left off. After a review of the new code, he/she is ready to pick up active development of the module for the day.  

Not all development work maps nicely to Gherkin requirements.  (But, more than you may think!)  Even when you cannot describe the work with Gherkin, the tasks should be skillfully defined so that they are granular enough to be completed within one day.

With granular requirements and tasks, you can run the sort of coding 'relay race' that keeps both in-house and remote team members intimately familiar with the code at all times.

Delightful Requirements: Hide distracting details

One of the most common mistakes made when adopting Gherkin style requirements is to simply port existing test scripts into the Gherkin syntax.  The result is a scenario detailing every mouse click and value entered on the way to the behavior being tested.  It is extremely hard for a business stakeholder to focus on the requirement amid all of the details.

So, good Gherkin scenarios will hide all of the distracting details.  When testing a user interface behavior, hide as many navigation details as possible.  When laying out the data involved, list only the elements that matter to the test.  Other scenarios have likely described all of the intermediate navigation requirements and have focused on the other data elements.  Let the test implementer worry about how to get there and how to fill in all of the other fields with valid values.  Make it easy for your readers to focus on the specific requirement at hand.  

Hide distracting details!

Delightful Requirements - Examples are your friend

At Amplified Development, we are strong proponents of using examples liberally in your specifications.  Examples just make requirements better.

The main objection to examples is that they make test implementation more difficult.  This is true; but, the level of difficulty may depend on your strategy for test data.  (Test data strategies deserve a discussion of their own.)  But, regardless, you should work to overcome or mitigate the difficulties, because examples are just so important to good requirements.

So, why are examples so wonderful?

  • Examples ensure you are on the same page.  Have you ever had a discussion about requirements where, after investing quite a bit of time, you discover you aren't talking about precisely the same thing?  No?  Then, I'm guessing you haven't worked on many requirements.  As soon as you introduce an example, such misunderstandings get revealed.
  • Examples promote exploration.  When a requirement is expressed with an example, there is a strong tendency for stakeholders to raise other examples with interesting variations.  True, even without examples, it is possible to have a discussion of variations and exception conditions.  But, without examples, focus is given to the phrasing of the rule and the exploration of variations often suffers.  It's just human nature: give an example and someone will offer another.
  • Examples can clarify boundary conditions.  Without examples, you can try to express a rule with precision.  But, there is nothing like stating which date is in range and which is not, which number is under the limit and which is over.

As we explore specific ideas for delightful requirements, you will see this theme over and over:  Examples are your friend!


Delightful Requirements: Seeking the sweet spot

The goal when writing requirements in Gherkin for automation can be expressed this way:

It is the quest for the sweet-spot between lucid language and efficient test implementation.  

If we focus on just one goal without the other, we will frustrate one of our primary audiences.

Writing for business users only

We might write Gherkin scenarios that communicate effectively to the business user, but are not readily implemented by a developer.  For example, consider this scenario:

Scenario: Widgets must have ISO Standard temporal flubargle settings
  When a widget has incorrect temporal flubargle settings according to ISO 5432
  Then that is an unacceptable widget

The business user might like this requirement very much.  But, the developer implementing the test will have a multitude of unanswered questions:  What are correct and incorrect temporal flubargle settings?  What action is being evaluated in this test?  What does it mean for the widget to be unacceptable?  What behavior should the system exhibit for an unacceptable widget?

Writing for developers only

Even worse, we might write Gherkin scenarios that are tailored for the developer, but are incomprehensible for a business user.  For example:

Scenario: Widgets must have valid CF_101 and RV_999 values
  Given a widget entity with these fields:
    | CF_101    | RV_999 |
    | 10-2-2015 | 33     |
  When I POST that entity
  Then the HTTP status code should be 400

If you write API code for a living, you can probably picture how to implement that test.  But, the business user will need every clause of that scenario interpreted for him.  That will seriously inhibit the meaningful conversation a scenario is intended to initiate.

Writing for both audiences

Requirements writers earn their keep when they keep implementation in mind while they write in the language of the business user.   

Perhaps the scenario could be worded this way to serve both audiences:

Scenario: Widgets must have ISO Standard temporal flubargle settings
  Given this widget:
    | Sample Date | Flubargle Value |
    | 10-2-2015   | 33              |
  When I attempt to create this widget
  Then I should be prevented from adding this invalid widget

The search for the ideal balance has few absolute rules.  The right balance for one organization may differ from another organization.  The important thing is the constant focus on serving both needs.

Delightful Requirements

(Note: The series on Extreme Remote will continue as a weekly post next Monday.  At this time, we are launching a second series of posts with the theme:  Delightful Requirements)

Requirements.  You're gonna have 'em.  Whether they are a hallway conversation with the developer, or a ten-pound doorstop of a requirements document.  Somehow, there needs to be an agreement of what software should be developed before the code gets written.

In an agile development process, you should avoid Big Up Front Design; but, you still need requirements.  Ideally, the requirements are drafted some time before a sprint starts.  Then, during sprint planning, they are reviewed, adjusted, approved, and committed.  The form of these requirements may vary.

At Amplified Development, we are enthusiastic proponents of Gherkin requirements (aka Cucumber, SpecFlow, Behave, etc.).  Gherkin is a structured language that is designed to be easily understood by business stakeholders, and to be readily wired up to automated test code.  It has this basic structure:

Scenario: Feature performs desired behavior
  Given some prerequisite state exists
  When I take some action
  Then I can verify the desired behavior occurred

This is the first in a series of posts that describes best practices for writing Gherkin.  Your goal, simply put, is to write requirements that delight those who read them, whether they are business-oriented, developers, or quality assurance experts.  (It may be hard to picture anyone being delighted to read software requirements; but, for those with skin in the game, well-written requirements just make every iteration, every day, go more smoothly.  And, who wouldn't be delighted with that?)  Business-oriented stakeholders should sense that you have captured how their business really works and have described a software system that will make their business work better.  Developers should readily form a clear picture of the software they must create.  Quality assurance experts should feel confident that little ambiguity exists and problem cases have been adequately covered.

We will first discuss a few high-level principles that will provide guidance towards high-quality requirements:

  • Seek the sweet-spot between meeting the needs of business stakeholders and those of developers.
  • Examples are your friend.
  • Hide distracting details.

Extreme Remote: Code Ownership - Everybody in the same code pool

When you have a team comprised of in-house and remote members, it is very tempting to organize the work so that each group works on separate sections of the project.  This helps each group work together effectively, have easy communication, etc.  Yet, there is a painful reality to be faced at the end of the project:  there are large portions of the project with which the in-house team is insufficiently familiar.  That tends to create a dependency on the remote team members beyond the planned end of the project. The goal should be to continue the relationship with remote team members because you were delighted with their work rather than feeling trapped by circumstances.

To avoid this, the project process should ensure that both in-house and remote team members are familiar with all parts of the system at all times.  The best way to ensure that is to have both in-house and remote developers working on the same code modules.  When a team is co-located, this can cause frequent conflicts; with an Extreme Remote team, however, the non-overlapping work hours allow for a 'relay race' style of development.  The developers 'pass the baton' at the beginning/end of each work period.

Here's how it can work:

(WH = Western Hemisphere; EH = Eastern Hemisphere)

  • WH Day 1 - In-house developer works all day implementing requirements, ensuring that work ends at a well-defined stopping point and changes are committed to the code repository.
  • EH Day 1 - Remote developer gets latest code, runs automated tests to ensure code is in a good state, and reviews changes from the in-house developer.  The remote developer identifies where the in-house developer left off in the list of requirements and works all day implementing additional requirements.
  • WH Day 2 - In-house developer gets latest code, runs automated tests to ensure code is in a good state, and reviews changes from the remote developer.  The in-house developer identifies where the remote developer left off in the list of requirements and works all day implementing additional requirements.
  • EH Day 2 - (You get the picture....)

With this approach, the in-house developer is intimately familiar with the code at all times.  If the remote developer must leave the team for any reason, the rate of progress will suffer, of course; but, it will only reflect the drop in manpower, without an additional learning curve.

To accomplish this style of work, there are specific techniques and processes that can help.  I will discuss these next.

Extreme Remote: Code Ownership

With traditional off-shore development, it’s hard not to end up with a code ownership problem.  When the planned project ends, only the off-shore team is truly familiar with the code base; even with best intentions, in-house developers are rarely ready to provide production support.  Your options?  (1) Continue to retain the off-shore team.  (2) Endure sub-optimal support while the in-house developers gain adequate expertise with the code base.  (And, it is possible to get an unpleasant surprise regarding code quality.)

A far better outcome would be for in-house developers to be deeply familiar with the code base when the planned project is complete and fully capable of production support.  A decision to continue the relationship with the remote team members would be based on a delightful experience rather than feeling trapped.  Of course, you would have the option to scale back to just in-house developers, as originally planned, with negligible transition pain.

To accomplish this, we need to break down the barriers between the remote and in-house team members.  In the next few posts, I will describe some models and techniques to promote deep familiarity with the complete code base for both local and remote developers.  

Extreme Remote: Product Management - Online conversations

Since the product owner is not within hearing distance of the remote team members (or not even awake when they are discussing issues), you absolutely must create a vibrant and effective online conversation.  Many tools such as HipChat and Slack support group chats.  Make this the primary way that issues get discussed.  And, have questions for the product owner clearly labeled so that he/she can focus on the product issues rather than all of the other technical discussions.

Extreme Remote: Product Management - Iteration Meetings

During the iteration cycle, there are some key meetings you can strongly encourage product owners to attend. 


Feature overview (aka ‘Story time’) – In advance of an iteration, you may wish to have a business-oriented overview of a system feature.  This is a great opportunity to have the product owner describe how an associated facet of the business works.  Many misunderstandings get pre-empted when developers can picture the business context.

Iteration planning – Early in iteration planning, you can review the requirements and any mockups.  The product owner may identify a misunderstanding that can be corrected before development begins.

Iteration review – The product owner should certainly be part of the iteration review, if at all possible.  Here is where any misunderstandings become plain to see.  If the product owner identifies course corrections at the end of every iteration, they can be implemented quickly and efficiently in the next iteration, keeping the project from straying off-course.

Note:  Product owner involvement in iteration meetings becomes even more meaningful when iterations are kept short.  Ideally, one week is best since it allows course correction very often.

For these iteration meetings, you may be able to request some extra flexibility from the remote team members.  Could they come in very early or be available late so that the product owner can meet at a more convenient time?  Depending on the product owner and the company culture, that may be important.

Extreme Remote: Product Management - Daily Meetings

To keep the team connected, there will likely be some form of daily meeting.  But, it may be at odd hours that are not convenient for product owners.  Don't hesitate to invite them, though.  You may be surprised at what is possible.

On one project, we managed to have two executive stakeholders on a 9:30pm nightly online meeting with the remote team members.  Nice participation if you can get it!  Realistically, I don’t ever enter into a project expecting that level of product owner involvement.  But, take as much interaction as you can get!