
Friday, 31 July 2009

9000 hours of billing with no single bug found yet

In a discussion on Test Republic, Pradeep Soundararajan summarised the evil, immoral, corrupt dark heart of the commercial side of the testing profession so well that I wanted to make sure his words were captured here so they won't be lost. The commercial reality is just that, a reality we can't escape, but I hope we can generate amazing revenue while adding value and avoiding fleecing our clients. Too many consultancies have been sued because they didn't – you know who you are.

Reply by Pradeep Soundararajan on July 28, 2009 at 6:17pm

Thanks for not posting it earlier. It gave me an opportunity to get you to post it.

I do exercises in my exploratory testing workshop that demonstrate that those who seem to care so much about RBT aren't actually caring about it. As Michael Bolton pointed out somewhere on Test Republic, those who profess so much about documentation don't themselves bother to read and write good documents.

Good RBT requires skills that people appear reluctant to build. By saying that I don't mean, "Yeah, I have built it". I have been trying to develop those skills as much as possible and constantly practise them, so that I am prepared for a war at any time.

"My perhaps more militant stand against RBT (the misuse of terms and vocabulary for 'coverage' aside) is because the Behaviour Driven Testing approach I advocate more and more these days focuses on behaviour relevant to the user. Ticking off Requirements against test cases is of no use to the user. Answering the questions above, and formulating testing around them AND the requirements, is."

Exactly. You would call it misuse and businessmen would call it fair-use. I am starting to realize that the scripted approach survives because there is more money out there for business people through that.

Mark, outsource work to me:

I would spend a couple of days analysing your requirements document and bill you for X hours per person involved in my team.

I would spend a couple of days writing a test plan document (and not refer to it) and bill you for 2X hours per person involved in my team for preparing it.

I would spend a month or two writing a test case document (and refer only to it) and bill you for 10X hours per person involved in my team for preparing it.

I would then create a traceability matrix (just to fool you and your boss about our coverage) and bill you for 5X hours per person involved in my team for preparing it.

That's 18X hours per person involved in the team. Assuming X is 50 hours and there are about 10 members in my team, that's 18 * 50 * 10 = 9000 hours of billing without a single bug found yet. If you are paying $20 an hour per person on average, you would have given me $180,000 of business without me or my team finding any bug yet.

Then comes the test case execution cycles and more billing. Why wouldn't a businessman be glad about the traditional approaches to test software?

Let's bother about the users of your product later, during our maintenance billing phase ;-)
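The billing arithmetic in the reply above can be laid out as a quick sketch. The phase multipliers, hours, team size, and rate are the illustrative figures from the example, not real data:

```python
# Illustrative billing model from the example above: each phase bills a
# multiple of X hours per person, with no bug found in any of them.
PHASES = {
    "requirements analysis": 1,   # 1X
    "test plan document": 2,      # 2X (never referred to again)
    "test case document": 10,     # 10X (the only thing referred to)
    "traceability matrix": 5,     # 5X (coverage theatre)
}

X_HOURS = 50        # assumed value of X
TEAM_SIZE = 10      # people on the team
RATE_PER_HOUR = 20  # dollars per person-hour

total_multiplier = sum(PHASES.values())               # 18X
total_hours = total_multiplier * X_HOURS * TEAM_SIZE  # 9000 person-hours
total_bill = total_hours * RATE_PER_HOUR              # $180,000

print(total_multiplier, total_hours, total_bill)  # 18 9000 180000
```

The point of the satire survives the arithmetic: every line item here is billable before a single test is executed.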

Tuesday, 28 July 2009

Lack of vendor support for Open Source

The lack of vendor support is a real issue for Open Source and free tools. It may seem logical that paid-for tools will get superior support from the folks who created them and have a commercial interest in promoting them.

This can certainly be true: NMQA (who I work for) created the Vienna test management tool and support it both through paid service contracts and through queries raised by the test community. However, the matter isn't as simple as proprietary tools getting superior support over Open Source or free ones.

For example, NMQA also offer a Selenium-Ruby automation framework (in various forms) that we support as aggressively as Vienna, which we wrote fully ourselves. The reason is that we see no difference between the two in terms of the support a customer needs: a proprietary solution we've developed in proprietary code needs the same support as an Open Source / free framework constructed of open source code that we've set up for them.

It’s when a customer tries to hit the internet, reading online documents and forum postings to do it themselves, that the trouble starts. Think about that for a second: inexperienced staff trawling through spurious sources of information as the way to learn and implement a key technology is a ridiculous strategy, yet it’s the one often taken. Open source tools are not an easy solution to adopt unless there is expertise available, in-house or via a consultancy. The learning curve inexperienced internal staff take on is usually too great a burden for organisations to support and won’t deliver anywhere near as fast as is needed. Add to that the lack of trusted sources of information and we begin to see why organisations are shying away from Open Source.

There’s the issue: organisations trying to wing it on their own with Open Source solutions will suffer more pain than if they buy proprietary tools and a service agreement. The best way is to engage a consultancy or specialist individual who can provide the same level of support you’d get from a support contract for a proprietary tool; done that way, there’s no difference between proprietary and Open Source solutions. The difference comes later, when you’re not spending tens of thousands on service contracts as insurance in case something goes wrong, which is equally ridiculous.

Mark Crowther.

Wednesday, 22 July 2009

Code Coverage with Test Cases?

It hasn't really struck me until now - why do testers think of coverage in terms of code? Why aren't we thinking of coverage in terms of what the system does, or should do? i.e. behaviour?

Thinking in terms of what the system should do is why we're testing, isn't it? Isn't that what the customer / user wants us to make sure of before they get the software? Isn't focusing on behaviour how we assure that UAT is a success?

Never in my career have I really done 'code coverage'. I've played with it, talked about it with developers, even helped define acceptable coverage levels, but I've never run code coverage tools and declared that my tests cover xx% of the code. The developers I've worked with have; I remember this being the way when I was at EA. They put together Unit Tests and Component Integration Tests, ran the tools or did the math, and declared coverage at a certain %.

What I have done, however, is declare coverage of Requirements. I've analysed what Test Scenarios exist, things I would do with the software to demonstrate the Requirements had been delivered on (coded the right thing), then written Test Cases to exercise the Scenarios (coded the thing right) and find those lovely bugs.


It's testing with a focus on Behaviour
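As a minimal sketch of what requirements coverage (as opposed to code coverage) can look like, here's an illustrative traceability mapping of Requirements to Test Scenarios. The requirement IDs and scenario names are hypothetical, purely to show the shape of the idea:

```python
# Hypothetical traceability: requirement -> test scenarios that exercise it.
# A requirement with no scenarios is uncovered, regardless of code coverage.
traceability = {
    "REQ-001 User can log in": ["TS-01 valid login", "TS-02 invalid password"],
    "REQ-002 User can reset password": ["TS-03 reset via email link"],
    "REQ-003 User can delete account": [],  # no scenario yet -> uncovered
}

covered = [req for req, scenarios in traceability.items() if scenarios]
coverage_pct = 100 * len(covered) / len(traceability)

print(f"Requirements coverage: {coverage_pct:.0f}%")
for req, scenarios in traceability.items():
    status = "covered" if scenarios else "NOT covered"
    print(f"  {req}: {status}")
```

The useful output isn't the percentage so much as the list of requirements with no scenario against them: each one is a behaviour the user cares about that nobody has planned to exercise.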


What are your Selenium challenges?

It seems that when using Selenium natively there are a number of common challenges people encounter. Here’s my list of things I encountered and thought “Hmm... how do I do that then?”
What would you add? What have you struggled with or are struggling with now?

• Dealing with pop-up windows
• Testing dynamic text or content
• How to go about testing Flash
• Capturing screen shots, either to file or in some form of report
• Iteration of the test case, running repeatedly with minor changes
• Data Driven Testing, pre-cooked data or generating on the fly
• Generating useful test status reports
• Setting up Remote Control
• Setting up Grid

There is a way around all of this: use an outsourced software testing partner, such as my own company Test Hats. However, with a little work you CAN fix these issues yourself, and now that Selenium 2 is out, some of them have gone away entirely.
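Taking one item from the list above, the data-driven testing pattern doesn't need anything Selenium-specific to understand: separate the test data from the test logic, then run the same test case once per row. A minimal sketch, where the login fields, expected outcomes, and the `check_login` stub are made-up illustrations standing in for real WebDriver calls:

```python
# Data-driven testing: each row is one iteration of the same test case.
# (username, password, expect_success) - illustrative data, not a real app.
test_data = [
    ("alice", "correct-pass", True),
    ("alice", "wrong-pass", False),
    ("", "any-pass", False),
]

def check_login(username: str, password: str) -> bool:
    """Stand-in for the system under test; a real Selenium test would
    drive the browser here (fill the form, submit, inspect the result)."""
    return username == "alice" and password == "correct-pass"

results = []
for username, password, expected in test_data:
    actual = check_login(username, password)
    results.append((username or "<blank>", "PASS" if actual == expected else "FAIL"))

for name, outcome in results:
    print(f"{name}: {outcome}")
```

The same loop works whether the data is "pre-cooked" in the script, read from a CSV file, or generated on the fly; only the source of `test_data` changes, not the test logic.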

Thoughts? Leave a message!