I was recently involved in a discussion about the role of QA, faux-agile, and development. How should they work together? What's the proper role of each?
This was my reply... it felt like a blog entry, so I turned it into one. :)
QA should be investing heavily in automated tests (unit, package-level, and integration tests) that are being moved into your continuous integration (CI) system. At least half to three-quarters of the QA team should be able to write tests in the framework of choice. (This assumes you started with an interactive test team.)
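To make that concrete, here's a rough sketch of what those three levels can look like in one suite. I'm using pytest with made-up names (the little "pricing" stand-in is defined inline so the file runs on its own); the markers are just one way to let CI run the levels separately.

```python
# test_pricing.py -- a minimal sketch of the three levels; the "pricing" code here
# is a hypothetical stand-in defined inline so the file runs on its own.
import pytest


def discount(price, percent):
    """Tiny function standing in for real production code."""
    return price * (1 - percent / 100.0)


class PricingService:
    """Stand-in for a package's public API."""
    def __init__(self, tax_rate):
        self.tax_rate = tax_rate

    def quote(self, subtotal):
        return round(subtotal * (1 + self.tax_rate), 2)


# Unit test: one function, no collaborators.
def test_discount_applies_percentage():
    assert discount(price=100.0, percent=10) == 90.0


# Package-level test: exercises the package's public API as a whole.
@pytest.mark.package
def test_service_quotes_with_tax():
    assert PricingService(tax_rate=0.08).quote(subtotal=100.0) == 108.0


# Integration test: would touch real external resources (here a temp file
# stands in), marked so CI can run it in its own stage.
@pytest.mark.integration
def test_quote_round_trips_through_storage(tmp_path):
    path = tmp_path / "quote.txt"
    path.write_text(str(PricingService(tax_rate=0.08).quote(subtotal=100.0)))
    assert float(path.read_text()) == 108.0
```

Registering the `package` and `integration` markers in pytest.ini keeps pytest from warning about unknown marks.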
Developers should also be adding automated tests... running all of these tests in continuous integration puts a huge dent in the amount of interactive QA that still needs to be done.
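How CI slices that suite depends entirely on your CI system, but as a neutral sketch, here's a tiny driver script a CI job could call, assuming the markers above; the stage names are invented for illustration.

```python
# run_tests.py -- hypothetical CI entry point; stage names are made up.
import sys
import pytest

STAGES = {
    "fast": "not integration",  # unit + package-level tests on every commit
    "full": "",                 # everything, e.g. nightly or pre-release
}

if __name__ == "__main__":
    stage = sys.argv[1] if len(sys.argv) > 1 else "fast"
    marker_expr = STAGES.get(stage, "")
    args = ["-q"]
    if marker_expr:
        args += ["-m", marker_expr]
    sys.exit(pytest.main(args))
```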
You never freeze your code during an iteration. I've heavily edited code (and had team members heavily edit code) the day it went to customers. Having a solid automated test suite, running cross-platform, enables practices like "ruthless refactoring" and gives developers and QA staff a powerful level of confidence that the product still works after any amount of editing.
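The reason that works is that the tests pin behavior, not implementation. A tiny, hypothetical illustration: nothing in this test cares whether slugify() uses a regex, a loop, or a library, so the function can be rewritten wholesale the day it ships and the suite still tells you whether it works.

```python
# test_slug.py -- hypothetical example: the test pins behavior, not implementation.
import re


def slugify(title):
    """Current implementation; could be rewritten wholesale tomorrow."""
    return re.sub(r"[^a-z0-9]+", "-", title.lower()).strip("-")


def test_slugify_behavior_is_stable():
    # These assertions stay valid across any rewrite of slugify(),
    # which is what makes "ruthless refactoring" safe.
    assert slugify("Hello, World!") == "hello-world"
    assert slugify("  Agile & QA  ") == "agile-qa"
```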
I have taken the results of an iteration and called them "frozen" so interactive testing can have an iteration (or two) with a non-moving target. The philosophy of freezing during an iteration, as described, is a direct result of long iterations. If your iterations are measured in months, they're not iterations. They're releases.
Sometimes consultants try to jam agile into waterfall because waterfall is what they understand. It's just a waste of time. Agile is a different mindset.
What does "done" mean? To me, it's either done enough to pass on to QA or a customer.... OR the tests pass. If I'm writing a package or API that can't be customer demo-ed, the tests can still pass and prove it works. If you're doing something "to big" for an iteration, you can still write unit tests or package level tests to prove it works.
Enjoy!