Chaos in Computing – A little code, a little chaos, a little joy

Code Review Criteria

I have had the opportunity to review a lot of automation test code and to work with teams as they begin peer reviewing each other's code.  I felt it would be useful to codify and share some of the criteria I use.  To be clear, these are the criteria I use for automated User Interface and Acceptance tests for web applications.  Much of this may apply to other test types or systems, but no warranties are expressed or implied if used outside of its intended purpose.

> Coverage

Does this test code cover the cases that we think it should?  Every automated test is designed to cover one or more use cases.  It is surprisingly common for a use case to be dropped or not truly covered.  It is just as much of a problem when a test covers more than it is intended to cover.  See the bonus section below on how to detect extraneous coverage.

> Fixed Waits

When testing a user interface, it is common to wait for an action to take effect before checking the results.  However, waits should always be for a specific condition and should continue as soon as that condition is met.  A fixed wait has to be set to the longest possible time plus some extra leeway (in case future changes require more time).  The test always waits the full duration, which makes it very inefficient.  And some day the operation will start taking longer, your fixed wait will not be enough, and you will start getting “random” failures.
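To make the difference concrete, here is a minimal Selenium/Java sketch of the two approaches; the element locators and the 30-second timeout are hypothetical, and the seconds-based WebDriverWait constructor shown is the pre-Selenium-4 form.

import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.support.ui.ExpectedConditions;
import org.openqa.selenium.support.ui.WebDriverWait;

public class WaitExamples {

    // Bad: always burns the full 30 seconds, and still fails the day the page takes 31.
    static void fixedWait(WebDriver driver) throws InterruptedException {
        driver.findElement(By.id("search-button")).click();
        Thread.sleep(30_000);
    }

    // Better: returns as soon as the condition is met, up to a 30 second ceiling.
    static void conditionalWait(WebDriver driver) {
        driver.findElement(By.id("search-button")).click();
        new WebDriverWait(driver, 30)
                .until(ExpectedConditions.visibilityOfElementLocated(By.id("search-results")));
    }
}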

> Third Party Dependencies

In a complex application (web or mobile in particular), some features may depend on third-party services that are unrelated to what you are actually testing.  Automated tests need to avoid depending on these third-party services.  You have to look closely in a peer review to catch these; third-party dependencies are a major cause of unreliable tests.  See the bonus section below for methods to detect third-party dependencies.

> Separation of test data / code / model

Watch for test data mixed in with your test code or your data model.  These should all be kept separate to make test maintenance and troubleshooting easier.
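One way to keep the three concerns apart is a data class, a page model, and a test that only orchestrates the two.  The sketch below is illustrative only, and all of the class and method names are hypothetical.

// Test data lives in its own class (or could come from a properties/JSON file).
class LoginData {
    static final String VALID_USER = "qa-user@example.com";
    static final String VALID_PASSWORD = "not-a-real-password";
}

// The page model knows about locators and page behavior, nothing else.
class LoginPage {
    void logIn(String user, String password) { /* drive the UI here */ }
    boolean isLoggedIn() { /* query the UI here */ return true; }
}

// The test only wires the data to the model and asserts the outcome.
class LoginTest {
    void validLoginShowsAccountMenu() {
        LoginPage page = new LoginPage();
        page.logIn(LoginData.VALID_USER, LoginData.VALID_PASSWORD);
        if (!page.isLoggedIn()) {
            throw new AssertionError("User should be logged in after submitting valid credentials");
        }
    }
}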

> Hard Coded Values

When you see hard-coded values sprinkled through the test code, you should call them out.  Hard-coded values should preferably be configuration driven, so that you can override them on a per-run basis, or at least consolidated into a single place where they are easy to find and update.  Furthermore, tests should share a limited set of values rather than using unique values for each test.  Again, this makes the code easier to update and maintain.
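As a rough sketch (the property names and defaults here are hypothetical), a small shared configuration class keeps the values in one place and lets them be overridden per run:

// Central place for shared values; each can be overridden per run,
// e.g. mvn test -DbaseUrl=https://staging.example.com
public final class TestConfig {

    public static final String BASE_URL =
            System.getProperty("baseUrl", "https://test.example.com");

    public static final String DEFAULT_USER =
            System.getProperty("testUser", "qa-shared-account");

    private TestConfig() { }
}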

> Performance

Watch for issues that will slow the tests down or add inefficiency.  Look for unnecessary or duplicate steps and eliminate them.

> Assertion Messages

When a test fails, it should be clear what underlying issue needs to be looked at.  When you see an assertion, ask yourself, “How would this appear in the error report, and would I understand it?”  The failure message should make it clear what is being tested and why it failed.
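For example, in a JUnit-style test (the class and method names are hypothetical), compare how the two assertions would read in a failure report:

import static org.junit.Assert.assertTrue;

public class RecipeReviewTest {

    // Fails with a bare AssertionError -- the report tells you nothing.
    void weakAssert(boolean reviewsVisible) {
        assertTrue(reviewsVisible);
    }

    // Fails with a message that names the feature and the expected behavior.
    void descriptiveAssert(boolean reviewsVisible) {
        assertTrue("Recipe reviews should be visible on a recipe page after the page loads",
                reviewsVisible);
    }
}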

> Complex Code

Complicated code is difficult to understand, review, troubleshoot, and maintain later.  There is usually some way to simplify it, even if that means breaking the code into separate, more narrowly focused pieces and ending up with more total code.

> Race Conditions

Race conditions are the hobgoblins that cause many “random” test failures.  They can be hard to spot in test code, but they are there.  Watch for cases where things can happen in more than one order while the test depends on a single ordering.

> Custom Code

Watch for custom test code that does not use the agreed-upon generic routines.  Your generic or library code is meant to be well tested, performant, and maintainable.  Custom one-off code in individual tests needs to be caught and corrected.

—— Bonus ——

> How do I detect extra coverage or third-party dependencies?

This usually seems difficult, but I have found a very simple way to find the extra things covered by a test, as well as third-party dependencies.  Ask yourself, “What would cause this test to fail?”  If the answer is something like “if the bob system is down”, then your test has a dependency on the bob system.  And if the bob system is not part of the build you are testing, then it is a third-party dependency.  If the answer is “if feature x fails”, then your test also covers feature x.

It seems simple, but I find that asking this one little question is very enlightening and helps me find unintended dependencies.

Can’t is a 4 Letter Word

What is the most offensive word that you know?  Would you use it at work with your colleagues or with your manager or team lead?  Maybe you would under certain circumstances.  How do you react when you hear your colleagues using it around you?  I certainly don’t mind a certain amount of swearing, and use it myself from time to time.  But there is one word that I really cannot stand.  And after hearing it used on me four times in one day recently, I just about lost it.

Warning: Rant In Progress

So what offensive word am I referring to?  The word is “can’t”.  That is right, can’t is one of the most offensive words that I know.  And using it with me may result in an unpleasant response.  Unfortunately, I hear it way too often in a business setting.

In almost every instance when someone tells me “can’t”, what they really mean is something else.  Here are some examples.

I can’t do that.

In my experience, this actually means “I don’t want to do that”.  It is almost as bad as people who say “That isn’t my job”.  And don’t get me started on that.

It can’t be done.

This usually means “I’m too lazy to figure this out”.  Don’t use this cop-out, because I am likely to start asking what options you have tried.  You could at least try “I’m not sure how to do this, do you have any suggestions?”

You can’t do that.

Sometimes this is the same as “It can’t be done” above, with the same meaning.  But sometimes they are trying to tell me what I can and cannot do, in which case this really means “I’m too lazy to figure this out, and I assume you would be too lazy as well”.  Don’t tell me what I can or can’t do.  Ever.

In my experience there is never anything that can’t be done.  Sometimes it takes some research, or requires learning a new skill, or a little help, or it could be expensive, or maybe there are some unpleasant side effects.  Those are all merely obstacles.  And obstacles can always be circumvented; you just have to figure out how.

So the next time you find yourself speaking to your colleagues or your team lead or manager and are about to say “can’t”, remember that it can be done.  And consider your words carefully.

Ok, I’ll try to calm down and end my rant for now.  If you have any more examples of people using can’t with you, go ahead and leave a comment along with what you think they really mean.

Test Automation: Calculating Business Value

In a previous article I discussed how to prioritize a test automation backlog.  An important part of prioritizing and tackling a test automation backlog is to identify or calculate the business value of each of the tests.

Overview

Calculating the business value for an automated test is similar to, but somewhat different from, calculating the value of developing a product feature.  The business value of building test automation for a particular feature is calculated from four factors.  The first is the percentage of site visitors affected by the feature under test.  This initial value may then be modified for the financial impact of the feature.  Next, the value may be further modified to account for the probability of failure.  The last factor to consider is the incremental impact for the feature.  Each of these is defined in more detail below, and a small illustrative calculation follows the list of factors.

Definitions

  • Feature under test: A feature of a product which can be independently tested against functional requirements.  A product will be made up of many different features.  In most cases, the features of a product are similar to, or the same as, the individual product requirements.
    • Example: The Login feature of the Gigya community product.
    • Example: The Pin It feature on a photo gallery photo.
  • Use Case: A feature will typically have a number of different use cases.  This could be the same feature used on different templates, displayed to different users, or other variations.

Business Value factors

  1. Impacted Users: Set business value based on the number or percentage of site visitors that are affected by the feature under test.  If all or most site visitors will be affected by the feature then this would be high impact.  If a very small percentage of users are affected then it would be low.
    1. Example: Recipe reviews on Food Network appear on nearly every recipe and are used by many users viewing a recipe.  Display of recipe reviews would be a high impact feature.
    2. Example: Change profile photo appears on the user account settings page.  The percentage of site visitors that would typically use this feature is extremely small, making this a low impact feature.
  2. Financial Impact: If there are unusual financial impacts related to this feature, it may modify the value.  The feature could be tied to a partnership which drives extra revenue, there could be legal obligations with direct or indirect financial consequences, or there could be an unusually high impact to brand reputation.  Any of these may increase the value of automated tests for this feature.  There may also be features which show no ads and are not reachable from search engines (no SEO value) and therefore have a lower than normal financial value, which would decrease the test automation business value.
    1. Example: The user registration process includes capturing user age to comply with COPPA requirements.  A failure of this feature could expose us to legal action and financial penalties (Up to $16,000 per instance).  This would increase the value of tests for this feature.
    2. Example: The home category photo galleries have an integration with WayFair.com.  A failure of this feature may impact the revenue from this partnership.  However, the actual revenue from this partnership may be very minimal at present, so it may not affect the business value at all.
    3. Example: There may be a feature that involves a page that contains no ads and is not indexable by search engines.  In which case the financial impact may be lower than normal.
  3. Failure Probability: Some features have a history of failure or are known to depend on complex or fragile code or infrastructure, which creates a greater risk of failure.  Of course, a feature may also be considered to have a lower than normal risk of failure.  In either case this may adjust the overall business value up or down.
  4. Incremental impact: The business value of test automation for a feature can sometimes be affected by other features which already have test automation.  This may be due to an overlap between features, but it more commonly applies to additional use cases of a single feature.  When there are multiple use cases, the tests for the first use case may already cover the underlying features and code shared by several use cases.  This can somewhat reduce the incremental value of adding the additional use cases.
    1. Example: The Asset Title component can be used with many different templates.  The business value for automated tests of the Asset Title on an article is high.  Extending the tests to cover the Asset Title on a photo gallery has low (but not zero) value, because the component logic and behavior are the same between the two templates.  However, the Asset Title component behaves differently when used on an Episode template, where it has different logic (related to the underlying show asset) for some of the default behavior.  So in that case it would have less value than the original use case on Article, but more than adding the photo gallery.
    2. Example: The user login feature supports several different social networks for authentication (Facebook, Twitter, Google, Yahoo).  A test for one of these would cover much of the underlying technology and code used by all of them; however, there are some parts that differ.  Therefore, the first social network test has a higher value and each subsequent one has less incremental value.
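As a rough illustration only, the four factors could be combined into a single score.  The multiplicative scheme and the example weights below are my own assumptions, not a formula from this article:

// Illustrative only: combines the four factors into one score by multiplying a
// base value by modifier factors. The choice to multiply and the example
// weights are assumptions made for this sketch.
public class TestValue {

    static double businessValue(double impactedUsersPct,   // fraction of site visitors, 0.0 to 1.0
                                double financialModifier,  // e.g. 1.5 for COPPA exposure, 0.7 for no ads / no SEO
                                double failureProbability, // e.g. 1.3 for fragile code, 0.8 for stable code
                                double incrementalFactor) { // e.g. 0.4 when earlier tests already cover most of the code
        return impactedUsersPct * financialModifier * failureProbability * incrementalFactor;
    }

    public static void main(String[] args) {
        // Recipe reviews: most visitors, normal finances, average risk, first use case.
        System.out.println(businessValue(0.9, 1.0, 1.0, 1.0));  // relatively high score
        // Asset Title on Photo Gallery: high traffic, but largely covered already.
        System.out.println(businessValue(0.8, 1.0, 1.0, 0.3));  // much lower incremental score
    }
}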

Stakeholders

Calculating business value for automated tests may involve several individuals.  First, the product owner for the product in question is in the best position to know the details used in the calculation.  Second, the lead engineer for the product will have additional insights, such as code fragility (used in Failure Probability), and will likely also have much of the information available to the product owner.  Finally, the test automation engineers and the automation architect will have details about related tests and experience with calculations for other products.

Next: Identifying tests not to automate.

Prioritizing a Test Automation backlog

Starting a test automation strategy may seem daunting due to the very large number of features and use cases that will be identified for automation.  Determining where to start and in what order to tackle a backlog of hundreds or thousands of potential tests can be paralyzing.  I recommend asking three questions to organize the test automation backlog.  First, should this test be automated at all?  Second, what is the business value of this test?  Third, what is the difficulty or effort to implement this test?

Remove any test from the backlog that should not be automated.  Then use the business value and difficulty/effort metrics to prioritize the backlog.  Start with the highest value, lowest effort tests first and schedule them to be worked on.  I would not worry about scheduling more tests beyond that at this point.  When you have worked through the first list of scheduled tests, you should reevaluate your backlog and again schedule the highest value, lowest effort tests.  You will find that items have been added, or that the business value or effort has changed since you first reviewed the list, so reevaluation is important.
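Here is a minimal sketch of that prioritization pass (the class and field names are hypothetical): drop the candidates flagged as not worth automating, then order what remains by business value relative to effort.

import java.util.Comparator;
import java.util.List;
import java.util.stream.Collectors;

public class BacklogPrioritizer {

    record TestCandidate(String name, double businessValue, double effort, boolean shouldAutomate) { }

    // Keep only automatable candidates and order them best value-to-effort ratio first.
    static List<TestCandidate> prioritize(List<TestCandidate> backlog) {
        return backlog.stream()
                .filter(TestCandidate::shouldAutomate)
                .sorted(Comparator.comparingDouble(
                        (TestCandidate t) -> t.businessValue() / t.effort()).reversed())
                .collect(Collectors.toList());
    }
}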

When you review your prioritized list and see items whose effort is higher than their business value, these should not be worked on.  They would seem to be examples of tests that should not be automated at all; after all, if the effort exceeds the value, they would clearly be a poor choice for automation.  However, I don’t suggest that you simply remove them from your unscheduled backlog.  The reason is that over time the effort to automate some of your more difficult tests will probably drop.  Your tools and frameworks, as well as your team’s growing skills, will reduce the effort for many of these tests over time.  The effort may also decrease as a side effect of implementing higher value tests.  Review these items along with the other items on your backlog periodically.

Calculating the effort to implement a test is something that most automation engineers will grasp fairly easily.  Depending on the team, this may be in story points, hours, t-shirt sizes, or some other technique.  However, calculating business value and identifying tests that should not be automated (beyond business value and effort) is not so straightforward.  I will cover these in detail in future articles.

Next: Calculating Business Value

Controlling Sauce Connect

Programmatically controlling a Sauce Connect Tunnel

I recently decided that I wanted to create and control Sauce Connect tunnels from within my own test code and could not find any examples of how this could be done.  After some research and experimentation, I was successful, so I want to document the method I am using in case anyone else needs to do the same.  Note that my test framework is written in Java and the method I outline should work with any JVM language.  Modifying it to work with another setup is left as an exercise for the reader.

Sauce Connect

I have been writing test automation for some of our websites recently and leveraging Sauce Labs when I need specific browsers, browser versions, and operating systems for those tests.  Using Sauce Labs resources to test our non-public development and test sites requires Sauce Connect to create a network tunnel between the Sauce Labs datacenter and ours.  I have borrowed the diagram below from Paul Hammant to show such a setup.

Sauce Connect tunnel example (diagram from Paul Hammant's blog)

The existing documentation for Sauce Connect describes how to set up and start a tunnel manually and run it as a long-running service.  This was not suitable for my needs.  If you do more digging you will find a Jenkins plugin that will start and stop tunnels for you as part of your jobs.  After evaluating it, I found it also did not meet my needs.  And finally, if you dig really hard, you will find a Maven plugin that will start and stop your Sauce Connect tunnel as part of your build process.  This came closer to what I was looking for, but I still wanted more control.

More Control

What I have implemented allows complete control of a Sauce Connect tunnel from within test code.  This allows test-specific controls such as unique tunnel naming, test-specific blacklists, and wiring the tunnel to a proxy server, among other things.  It actually turns out to be fairly simple to accomplish by leveraging work that others have already done and taking it a little further.  I will first show the code you need, then delve a little into what it is doing.

    Dependencies

Add the following dependency to your project.  Be sure to use the most recent version.

<dependency>
 <groupId>com.saucelabs</groupId>
 <artifactId>ci-sauce</artifactId>
 <version>1.111</version>
</dependency>
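The snippets below assume a sauceTunnelManager and a handful of variables already exist.  Here is a minimal sketch of setting them up, assuming the SauceConnectFourManager class and its boolean quiet-mode constructor from the ci-sauce library, and the standard Sauce Labs environment variables for credentials; check the version of ci-sauce you pull in for the exact class and constructor names.

import com.saucelabs.ci.sauceconnect.SauceConnectFourManager;
import com.saucelabs.ci.sauceconnect.SauceTunnelManager;

// Assumed wiring: quiet mode off so Sauce Connect output is visible while debugging,
// credentials from environment variables, port 0 so any open port is used, and a
// named tunnel (see the options section below).
SauceTunnelManager sauceTunnelManager = new SauceConnectFourManager(false);
String sauceUser = System.getenv("SAUCE_USERNAME");
String sauceKey = System.getenv("SAUCE_ACCESS_KEY");
int port = 0;
String tunnelOptions = "--tunnel-identifier MyTunnel-0001";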

    Start Tunnel

Process tunnel = sauceTunnelManager.openConnection(
 sauceUser,      // username
 sauceKey,       // apiKey
 port,           // port
 null,           // sauceConnectJar
 tunnelOptions,  // Tunnel options
 null,           // printStream
 null,           // verboseLogging
 null            // sauceConnectPath
 );

    Stop Tunnel

sauceTunnelManager.closeTunnelsForPlan(
  sauceUser,      // username (same as start tunnel)
  tunnelOptions,  // tunnelOptions (same as start tunnel)
  null);

Explanation

This code uses the Sauce Connect Jenkins plugin to do the heavy lifting, which turns out to be exactly what the Maven plugin does.  The ci-sauce library actually contains all the code for the Windows and Unix versions of the Sauce Connect software.  When you call the library's openConnection() method, it extracts the appropriate software and runs it in a separate process.

When you call closeTunnelsForPlan() you have to pass the same user and tunnelOptions that you used when you started the tunnel so that it identifies the correct tunnel to shut down.

Important options

  • sauceUser : Your SauceLabs username.
  • sauceKey : The API key from your SauceLabs account.
  • Port : The port you want the tunnel to listen on. Null will use the default port of 4445. A zero value will use any open port.  This is important if you need to run multiple tunnels on the same computer.
  • tunnelOptions : This is a string of command line options that set things like the tunnel name, proxy settings, and blacklist patterns. See the Sauce Connect documentation for available options, and the example below for formatting.

Example tunnelOptions

--tunnel-identifier TunnelName@Env-0001 --fast-fail-regexps www.unstable.com,www.thirdparty.com

Other thoughts

  • Updates: Sauce Labs releases updates to the Sauce Connect software pretty frequently.  Upgrading the version of the tunnel software your tests use is as simple as updating the version of the ci-sauce dependency in your project.  Just like that, the new version is downloaded and used in your tests going forward.  Very simple.
  • Tunnel Name: It is important to ensure that the name of your tunnel is unique.  If you use the name of an existing tunnel, the other tunnel will be shut down automatically when your new tunnel starts.  I use a naming convention based on the test parameters and a random number; a small sketch of such a naming scheme follows this list.
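Here is a small sketch of such a naming scheme (the parameter names are hypothetical); it combines a couple of test parameters with a random suffix so that concurrent runs never collide:

// Build a unique tunnel identifier from test parameters plus a random suffix,
// so two runs never reuse a name and shut each other's tunnel down.
String tunnelName = String.format("%s-%s-%s",
        System.getProperty("env", "qa"),
        System.getProperty("browser", "chrome"),
        java.util.UUID.randomUUID().toString().substring(0, 8));

String tunnelOptions = "--tunnel-identifier " + tunnelName;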

Resources

JavaScript parameter passing

How does JavaScript pass parameters into functions?  I have seen a lot of confusion and misunderstanding on this topic for several years, even among people who consider themselves JavaScript experts.  I have seen articles and posts that state it is Pass-By-Reference, and others that state it uses Pass-By-Value or Pass-By-Reference depending on the argument type.  These are all wrong, so I thought I would take a stab at explaining exactly how it works and why it is so misunderstood.

To start with, let's look at the different methods used by various programming languages to handle parameter passing.  The two most common are Pass-By-Value and Pass-By-Reference.  There are other, less common methods, such as Pass-By-Name, that I have used, but we'll skip over those since they are not applicable here.

Pass-By-Value

In the Pass-By-Value method, parameter values are copied onto the stack and then made available to the function.  Once the function returns and the stack is popped, the copies are gone.  This means that the variables in the outer scope will never be modified, no matter what the function may do to them.  This provides a nice safety net to ensure that a function does not accidentally modify something that it shouldn't.

Pass-By-Reference

In the Pass-By-Reference method, a reference (pointer) to the variable is created, placed on the stack, and made available to the function.  Once the function returns and the stack is popped, the reference is gone.  However, unlike in the Pass-By-Value method, any changes made to the referenced variable will be visible in the outer scope.  This can be more performant when dealing with large data structures since you do not have to copy the entire value.

JavaScript (aka ECMAScript)

So which method is used by JavaScript?  JavaScript always passes function arguments by value.  But I'm sure you have seen many cases where functions modify objects in the outer scope.  This is why people often think Pass-By-Reference is being used.  To see what is going on, let's look at a few examples.

Example #1 – primitive

var outCount = 5;

function example1(inCount) {
    inCount = inCount + 1;
}

example1(outCount);
console.log("Output: " + outCount);
Output: 5

In this example the console is logging “5” and not “6”.  The changes within the example1 function are not visible in the outer scope.  This shows that the parameter was passed in by value.

Example #2 – object

var bob = {firstname:"Bob", lastname:"Smith"};

function example2(person) {
    person.firstname = "Sally";
}

example2(bob);
console.log("Output: " + bob.firstname);
Output: Sally

In this example the console is logging “Sally”.  The changes made within the example2 function are visible in the outer scope.  So it appears that the parameter was passed by reference.  But let's look at another example before we jump to conclusions.

Example #3 – object part deux

var bob = {firstname:"Bob", lastname:"Smith"};

function example3(person) {
    person = {firstname:"Sally", lastname:"Smith"};
}

example3(bob);
console.log("Output: " + bob.firstname);
Output: Bob

In this example the console is logging “Bob” rather than “Sally” like in the previous example.  The changes made within the example3 function are not visible in the outer scope.  So in this example it appears that the parameter was passed by value and not by reference.  We need to look under the covers and see what is going on.

Under the covers

In all three examples above JavaScript is passing the parameter into the function by value.  The reason that the behavior looks different is because of the type of variables we are passing and the types of modifications we are making, which are different in each case.

The variable used in example #1 (outCount) is a number.  It is stored internally in memory as 8 bytes that represent the value.  This type of variable is referred to as a primitive.  When we pass the value into our function, JavaScript makes a copy of those 8 bytes and puts it on the stack for the function to access (as inCount).  The copy is then thrown away after the function returns.

The variable in examples #2 and #3 (bob) is an object.  It is stored internally in two parts.  One is a block of memory on the heap that contains the representation of the object.  The other is the reference to that object in memory, and this second part is what is actually stored in the variable.  When we pass this variable into our function, JavaScript makes a copy of the reference and puts it on the stack for the function to access (as person).  The copy of the reference is then thrown away after the function returns.  However, the contents of the object, which both references (bob and person) point to, are shared between the inner and outer scopes and are never copied.

This means that when we make changes to the contents of an object passed into a function (which is what example #2 does), those changes persist after the function returns.  If, however, we change the reference itself, such as pointing it at a new object (which is what example #3 does), that change is discarded after the function returns and the outer scope is unaffected.

Conclusions

JavaScript always passes function parameters by value.  If the variable being passed is an object, then the function has access to the object contents in a shared memory location through a reference.  Changes to the object contents will be visible in the outer scope after the function returns, however changes to the reference itself will not be visible in the outer scope after the function returns.

Examples of primitives in JavaScript include numbers, booleans and strings.  But be wary, because if you instantiate your variable with the new operator then it will be an object rather than a primitive and it may not behave the way you expect!

var x = new String();    // This string is an object

I hope you now have a better understanding of what is going on in JavaScript functions.  Misunderstanding how this works can (and does) lead to bugs that are difficult to diagnose.  Hopefully your new understanding will save you time and trouble in the future.  If so, leave me a note to let me know!

Woman in Technology

I attended the CodeStock development conference in Knoxville recently.  One of the sessions that I attended was the Woman in Technology panel.  Michael Neel was on the panel and he made an interesting comment: while there were women in technology companies, women did not give keynote addresses at technology conferences.

Michael is one of the founders of CodeStock, and in 2010 he chose Rachel Appel as the keynote speaker for the event.  Michael was quoted as saying, “For the record, I wanted Rachel to Keynote before I decided on the Woman in Technology theme. It was the result of looking around and trying to find another technology conference that has had a female keynote speaker. I realized I couldn’t find one!”.  Michael brought this up during the WIT panel discussion and challenged anyone to name another technology conference with a female keynote speaker prior to 2010.

I immediately thought of my favorite keynote address, by Marissa Mayer at the 2008 Google I/O conference, entitled “Imagination, Immediacy, and Innovation… and a little glimpse under the hood at Google”.  At the time, Marissa was the VP of Search and User Experience for Google.  I did not attend the conference that year, but I watched the recorded sessions after the event.  I was so impressed with Marissa’s keynote that I set up a lunch session and invited my entire development team to come and watch it with me.  It is obviously a little old now, but it is still a fascinating look behind the scenes at Google.

I would like to add a thank you to Cory House.  His keynote at CodeStock this year inspired me to respond to this particular challenge with a blog post rather than an email.

If you have been to a technology conference with a female keynote speaker, share with the class!  Leave a comment with the conference and who the speaker was.

CodeStock 2014

I have been attending CodeStock 2014 here in Knoxville the last few days with a group of my software and quality assurance engineers.  Several were even speakers this year!  Thanks to the conference organizers for putting together another great conference this year!

I did not plan to speak, but I got pulled into presenting a Lightning Talk.  I gave an “Intro to Content Delivery Networks” Lightning Talk.  I have given a much longer talk that delves into the Akamai CDN, how we use it, and the many ways it is misunderstood.  It was fun to do a shorter, more basic, introductory talk on CDNs in general.

I want to congratulate Christine Jones on her first conference talk, “Intro to Android Development”.  It was a nice way to remove some of the barriers to jumping in and writing your first application.  I also want to congratulate Michael Phelps on his great talk, “There is no silver bullet: approaches to scaling agile and being a better team”.  It was interesting to see how agile was used on a large-scale project with as many moving pieces as the Food Network website.  I believe this was his first conference presentation as well.

Cory House @ OutlierDeveloper gave some food for thought in his keynote “Becoming an Outlier” on why and how you should take your own path and stand out from the typical developer.  For those who find they are not passionate about their current job, his advice on rebooting your career offers some paths to find your way out.

I wrapped up the conference by catching Michael Neel’s “The Quest for Fun” talk, and I am glad I did.  It came highly recommended by another developer.  I would not normally have gone to a session on game design, but it was an interesting and engaging talk, and it provided some insight into the website development that I typically oversee.  While we do not build games, the fact that our websites are ad supported and must appeal to consumers who have other choices means that some of the same thought that goes into understanding your audience for game design also applies to our website development.  It definitely provided some food for thought.

See you at CodeStock 2015!

Speaking at Agile Knoxville

I will be speaking at the Agile Knoxville meeting on Aug 6th.  See their website for details.  The topic will be Distributed Scrum.  It should be a good time.

Update

The Agile Knoxville session was really great.  There was a good set of people.  Thank you to everyone that attended, you all had good questions and feedback. And thank you to Adrian for forcing me to come speak. 🙂
Agile Development Conference 2012

I was invited to speak at the Agile Development Conference in November in Orlando.  It was a great experience and I’m glad I agreed to do it.  I have received some very positive feedback from some of the attendees at my talk.

My talk was entitled “Distributed Scrum: Dangerous Waters – Be Prepared”.  See my talk page for more details about the conference and my talk.

Distributed Scrum: Dangerous Waters – Be Prepared

I want to extend my thanks to the conference organizers who invited me and provided guidance.  I also want to thank the members of the Scripps Networks communications and marketing departments for their assistance with my presentation and for providing company swag to give out.  Thank you!

Finally I also want to thank the people that showed up for my talk.  Thank you for showing up!  I hope you found something in my talk useful.