Deploying -- Safely

Readings

Deployment: Client's View

http://media.photobucket.com/user/xmrsdanifilth/media/Funny%20Jokes%20Comedy%20Humor/10b7e3d6.gif.html

Deployment: What Can Go Worng?

us.thedailywtf.com

Deployment: What Can Go Worng?

This page has code to calculate a year, given a date in milliseconds. Test to see if it works. Enter a valid date after 1980 in the form MM/DD/YYYY. The millisecond equivalent will be passed to the code. If the code is correct, the Result field should display the year of the date you gave. (1, 2)

[Interactive demo fields: Enter date / Or milliseconds / Calculated year]
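
Below is an illustrative sketch only -- not the code on the page above -- of one common way to turn milliseconds into a year, with a classic leap-year flaw. The function names and the 1980 epoch are assumptions made for illustration.

    def is_leap(year):
        return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

    def calculate_year(millis):
        days = millis // 86400000 + 1      # day 1 = January 1, 1980 (assumed epoch)
        year = 1980
        while days > 365:
            if is_leap(year):
                if days > 366:
                    days -= 366
                    year += 1
                # BUG: if days == 366 (December 31 of a leap year),
                # nothing changes and this loop never ends
            else:
                days -= 365
                year += 1
        return year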

Deployment: What Can Go Right?

Continuous Deployment

Continuous Deployment: Getting There

So, Why Don't You Test?

  • Don't have time
  • Tests too hard to write
  • Running tests slows development
  • Tests have to be constantly updated
  • These are real problems...
  • ... so analyze and solve

Don't Have Time

  • Why not?
    • Client wants new code now!
    • Writing and running tests will delay delivery
    • Even more delay if tests fail!
  • Root cause:
    • Too much code in one bite to test adequately
  • Solution: Test-Driven Development (sketched after this list)
    • First, write code to test for the new functionality
    • Run and verify tests fail as expected
    • Code until tests pass
    • Deploy, and go back to writing new tests
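
A minimal sketch of that first step, assuming a hypothetical datecalc module and year_from_millis function that do not exist yet -- the test is written, and expected to fail, before any application code:

    # test_datecalc.py -- written BEFORE the code it tests exists
    import unittest
    from datecalc import year_from_millis   # fails until datecalc.py is written

    class TestYearFromMillis(unittest.TestCase):
        def test_start_of_2020(self):
            # 2020-01-01 00:00:00 UTC, in milliseconds since the Unix epoch
            self.assertEqual(year_from_millis(1577836800000), 2020)

        def test_leap_day(self):
            # 2016-02-29 00:00:00 UTC -- an edge case worth pinning down
            self.assertEqual(year_from_millis(1456704000000), 2016)

    if __name__ == "__main__":
        unittest.main()

Run it, watch it fail (that is the expected result at this stage), then write just enough code to make it pass.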

Tests Too Hard to Write

  • Why?
    • Lots of boilerplate for running, displaying, and counting test results
    • Tricky cases when exceptions and input/output are involved
  • Solution: use standard test frameworks (example after this list)
    • Acceptance test frameworks: Cucumber, FitNesse, ...
    • Unit test frameworks: JUnit, RSpec, PyUnit, ...
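
For instance, with Python's built-in unittest (PyUnit), the running, counting, and reporting come for free, and exception cases need no extra plumbing. The parse_date function here is a stand-in for whatever unit is actually under test:

    import unittest
    from datetime import datetime

    def parse_date(text):
        # stand-in for the code under test: MM/DD/YYYY -> datetime
        return datetime.strptime(text, "%m/%d/%Y")

    class TestParseDate(unittest.TestCase):
        def test_valid_date(self):
            self.assertEqual(parse_date("02/29/2016").year, 2016)

        def test_invalid_date_raises(self):
            # exception cases need no try/except boilerplate
            with self.assertRaises(ValueError):
                parse_date("13/45/2016")

    # "python -m unittest" discovers, runs, counts, and reports the results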

Running Tests Slows Development

  • Why?
    • Running all the tests takes minutes
    • Many manual steps needed to run all the tests
    • Lots of junk to clean out of the database afterwards
  • Solution: automate, speed up, decouple
    • run tests automatically on a separate continuous integration server
    • fast, decoupled unit tests with mocks (sketched below)
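
A sketch of the "decouple with mocks" idea, with hypothetical names throughout: the report code is tested against a fake database object, so the test runs in milliseconds and leaves nothing behind to clean up:

    import unittest
    from unittest.mock import Mock

    def yearly_total(db, year):
        # hypothetical unit under test: sums order amounts for one year
        return sum(order["amount"] for order in db.orders_for_year(year))

    class TestYearlyTotal(unittest.TestCase):
        def test_sums_orders_without_touching_a_real_database(self):
            fake_db = Mock()
            fake_db.orders_for_year.return_value = [
                {"amount": 10}, {"amount": 25},
            ]
            self.assertEqual(yearly_total(fake_db, 2012), 35)
            fake_db.orders_for_year.assert_called_once_with(2012)

    if __name__ == "__main__":
        unittest.main()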

Tests Have to be Constantly Updated

  • Why?
    • Every UI change breaks acceptance tests on the whole system
    • Every database refactoring breaks low-level unit tests
    • New data in real database breaks tests on aggregate reports
  • Solutions:

Types of Testing

  • User tests: detect what is and isn't working with users
  • Acceptance tests: define just-in-time requirements for each user story
  • Unit tests:
    • define and document the intended behavior of every unit of code
    • detect new code that breaks old code (regression)
  • Integration tests: detect inter-module communication failure
  • Stress tests: detect scale-up issues

Acceptance Tests

  • Client and developers define acceptance tests for each user story
    • current iteration stories only!
  • Typically a new product will eventually end up with dozens to hundreds of acceptance tests
  • Many tools exist to make these more readable by clients

Unit Tests

  • Written by developers
  • Meant to test code units (functions, methods, classes)
  • Need to be numerous, fast, automated
    • if not, they won't be run
  • Frameworks for writing and running unit tests exist for all modern programming languages
    • Rails: RSpec, Cucumber
    • Python: PyUnit
    • JavaScript: Jasmine, QUnit
    • Don't write your own framework!

Test Driven Development (TDD)

  • When writing a new unit of code
    • Write test code for it first
    • Run all the unit tests
      • Do the new ones fail (or pass) as expected?
    • Write the code to make all tests pass, and no more (see the sketch below)
    • Clean up (refactor, etc.) after all tests pass
    • Repeat
  • Tests are the requirements -- way better than words
  • Creating tests first leads to better-designed application programming interfaces (APIs)
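
Continuing the hypothetical year_from_millis example from the test-first sketch earlier, the "make it pass, and no more" step might be as small as this (standard library only):

    # datecalc.py -- written only after the tests above exist and fail
    from datetime import datetime, timezone

    def year_from_millis(millis):
        """Return the UTC calendar year for a Unix timestamp in milliseconds."""
        return datetime.fromtimestamp(millis / 1000, tz=timezone.utc).year

Once all the tests pass, refactoring -- renaming, extracting helpers -- happens under the protection of those same tests.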

Integration Tests

  • Written by developers
  • Test collections of communicating modules (see the sketch after this list)
    • should include all major communication paths
  • Are typically fewer and slower than unit tests
  • Failure should lead to new unit tests, e.g., module B failed when called by A
    • if A sent bad data, add unit tests on A to catch that
    • if B failed to handle good data, add unit tests on B to catch that
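
An illustrative sketch, with stand-in functions playing the roles of modules A and B, showing an integration test that exercises the real call path between them rather than mocking it:

    import unittest

    # stand-ins for two hypothetical modules, A and B, that talk to each other
    def build_query(year):           # "module A"
        return {"year": year}

    def run_report(query):           # "module B"
        return {"total": 0, "year": query["year"]}

    class TestReportIntegration(unittest.TestCase):
        def test_a_output_is_accepted_by_b(self):
            # exercise the real A -> B call path, no mocks
            result = run_report(build_query(2012))
            self.assertEqual(result["year"], 2012)

    if __name__ == "__main__":
        unittest.main()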

Integration Tests

  • On every update to the code repository:
    • Build and deploy to a production environment
    • Run all tests on that deployment
    • Stop development if any build step or test fails (sketched below)
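
A hedged sketch of that "stop the line" rule as a tiny driver script; the build and test commands are placeholders, and in practice a CI server such as Hudson or Jenkins plays this role:

    import subprocess
    import sys

    # placeholder pipeline steps -- a real project substitutes its own
    # build, test, and deploy commands
    STEPS = [
        ["python", "-m", "compileall", "."],        # "build"
        ["python", "-m", "unittest", "discover"],   # run all the tests
    ]

    for step in STEPS:
        if subprocess.run(step).returncode != 0:
            print("Pipeline failed at:", " ".join(step))
            sys.exit(1)          # stop the line: nothing gets deployed
    print("All steps passed -- safe to deploy")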

Stress Tests

  • Can the web site handle many users, hostile attacks, thousands or millions of data items, months of uptime, ...?
  • Use automated tools to simulate multiple users (sketch below)
  • Use tools to profile where your bottlenecks are
  • Biggest Pitfall: Doing this too soon.
    • Worry about scale after you succeed.
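
A minimal sketch of "simulate multiple users" using only Python's standard library; the URL is a placeholder, and dedicated load-testing tools do this far better:

    from concurrent.futures import ThreadPoolExecutor
    import time
    import urllib.request

    URL = "http://localhost:3000/"   # placeholder: point this at a staging deployment

    def one_request(_):
        start = time.time()
        with urllib.request.urlopen(URL) as response:
            response.read()
        return time.time() - start

    # fire 100 requests from 20 concurrent "users" and report the worst latency
    with ThreadPoolExecutor(max_workers=20) as pool:
        timings = list(pool.map(one_request, range(100)))
    print("slowest response: %.3f seconds" % max(timings))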

Becoming Better Testers

Bugs and Tests

  • When a bug happens, don't fix it right away
  • Write a unit test that reliably reproduces the bug (example below)
    • If you can't, you don't know what the bug is
  • Now write code to pass the test and fix the bug
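
For example -- the bug and module names here are hypothetical, continuing the earlier sketches -- a regression test that pins the reported behavior down before anyone touches the code:

    import unittest
    from datecalc import year_from_millis   # hypothetical module from the sketches above

    class TestYearEndRegression(unittest.TestCase):
        def test_last_second_of_a_leap_year(self):
            # bug report: December 31 of a leap year came back as the next year
            # 2008-12-31 23:59:59 UTC, in milliseconds since the Unix epoch
            self.assertEqual(year_from_millis(1230767999000), 2008)

    if __name__ == "__main__":
        unittest.main()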

Browser Testing I

Browser Testing II

  • Install multiple browsers
    • at least IE, Firefox, Safari, Chrome (Opera for mobile browsing)
    • you can have multiple versions of IE
  • Visually inspect your pages at least once a week in every browser you support

Browser Testing III

Continuous Integration

http://www.javaworld.com/javaworld/jw-12-2008/jw-12-hudson-ci.html

CI Challenges

http://www.javaworld.com/javaworld/jw-12-2008/jw-12-hudson-ci.html

Continuous Integration Options

Continuous Integration Guides