Devil’s Advocate: Don’t Be Agile

Continuing our series on the pros and cons of popular technical topics for non-technical managers. We look today at the concept of Agile Development and how it will really #$%* you and your developers over.

As a developer, there are three words I dread hearing when I’m interviewing at startups:

We are agile-ish.

Or really any variation thereof: agile inspired, agile influenced, kind of agile. UGH. Going agile is like going gluten-free: doing it half-way causes more pain than not doing it at all.

So just don’t be agile.

The Spirit of Agile is Trusting Your Developers to Lead

To non-technical managers Agile is appealing because it means getting results fast. Indeed, agile promises an MVP in your hands ASAP, a quick launch, followed by growth growth growth.

Who wouldn’t love that?

But the problem with Agile Development is that most non-technical managers do not really do much research into why Agile methodology gets results so quickly. They don’t realize that Agile aims to move decision-making power out of their hands and into the hands of their developers. They think that by importing the format of Agile – sprints and scrums and whatnot – efficiency will just naturally follow. They micromanage, which is exactly what Agile Development is trying to prevent, and when delays hit they micromanage even more.

Agile Development aims to destroy the typical “waterfall” model, wherein orders of what to build come down from above for programmers to execute. That doesn’t mean that non-technical team members have no place in the process. On the contrary, non-technical team members, particularly customers and other stakeholders, are a vital part of Agile Development.

But in order to work, programmers in Agile need the freedom to experiment, and to execute without getting caught up in multiple levels of management approval. Agile aims to create small teams where consensus and collaboration are easier.

What typically happens in “Agile-ish” situations is that as developers work on initial product specs, they must constantly come back to the non-technical manager for clarification or adjustment. As the non-technical manager tries to save time and money, acting as Agile Development intends – that is, experimenting with one approach, collecting feedback, and evolving – becomes nearly impossible. Building systems to collect feedback means devoting resources to something other than the core product, so it is usually put off until “later”. Testing out new ideas is resented, as non-technical managers have already decided what they want built and perceive these experiments as time and money taken away from the real product. (This is particularly true when the dev team is working freelance.)

Agile Development relies on the assumption that you trust the judgement of the people you’ve hired and therefore do not feel the need to dictate every element of every decision to them. After all, what does the manager’s opinion on the position or color of a button matter if you’re going to experiment with different options and choose the one that is the most successful with the customer?

But too often non-technical managers get caught up in policing their technical staff in order to eliminate waste. They are afraid of the false starts that Agile Development tells them to embrace. They want Agile, but they want an impossible hybrid of Agile where all the decisions about the product are made in the beginning and are all 100% right.

This is not Agile Development at all. The first rule of Agile Development is to assume that you’ve gotten some aspect of the product wrong and to structure your process around systematically identifying and correcting those mistakes. Even if by some miracle you do manage to get everything 100% correct the first time, business situations change. Products need to evolve.

Yet in the Agile-ish development cycle the product must first be built exactly as the non-technical manager wants it. If the non-technical manager has overlooked a decision or a problem comes up, the dev team must wait for commands. To take initiative and try a solution without the non-technical manager’s approval is insubordination. Feedback from customers is cherry-picked and reinterpreted by the manager. When one component of the product is finished, very little effort is “wasted” talking with stakeholders. It’s more important to move on to the next feature.

And so it goes…

Agile + Remote == Death

What is it that people do when they want to be Agile without actually giving up the control necessary to be Agile? They take on the structure of Agile Development without any of the philosophy and end up with an impossible boondoggle in code. We’ve all experienced the horrors of failing Agile: the 15-min stand-ups that last for two hours, the endless series of planning meetings, “sprints” composed mainly of bug fixes and tweaks because not enough time was budgeted for proper testing and code review.

Agile relies on free flowing communication between members of a small team. Co-location, while not absolutely essential, is considered extremely important.

When your team is remote, especially when they are spread out across timezones, the type of informal collaboration and communication Agile aspires to becomes very difficult to achieve. As a result the daily morning “stand up” becomes the primary (and sometimes sole) method of communication between team members. Instead of fifteen minutes touching base, these conference calls become impossibly bogged down with conversations that would have naturally happened throughout the work day if everyone was working out of the same space.

It is possible for a remote team to be Agile, but it is very difficult … especially when the team members are strangers to each other.

Becoming Agile: Spirit First, Process Second

The goal of Agile Development is to rid dev teams of bureaucracy by throwing out the restrictions of stuffy management processes. It is therefore ironic that the first thing non-technical managers do when going Agile is ignore Agile Development’s core philosophy and skip straight to implementing its processes. Agile Development, when poorly done, has become the very monster it was intended to slay.

Its core principles read something like a eulogy now:


  1. Customer satisfaction by rapid delivery of useful software
  2. Welcome changing requirements, even late in development
  3. Working software is delivered frequently (weeks rather than months)
  4. Close, daily cooperation between business people and developers
  5. Projects are built around motivated individuals, who should be trusted
  6. Face-to-face conversation is the best form of communication (co-location)
  7. Working software is the principal measure of progress
  8. Sustainable development, able to maintain a constant pace
  9. Continuous attention to technical excellence and good design
  10. Simplicity—the art of maximizing the amount of work not done—is essential
  11. Self-organizing teams
  12. Regular adaptation to changing circumstances


If you aren’t willing to sign up for that, just don’t be Agile.

Will Doctor For Food: Exploring Medicare/Medicaid Open Payments Data

About a month ago, an alert came across my desk (well… metaphoric desk anyway): the Centers for Medicare & Medicaid Services had released updated data downloads for their Open Payments program. When I followed the link through to check it out the following warning greeted me:

Some datasets, particularly the general payments dataset included in the zip file containing identifying information, are extremely large and may be burdensome to download and/or cause computer performance issues. […] Be advised that the file size, once downloaded, may still be prohibitive if you are not using a robust data viewing application. Microsoft Excel has limitations on the number of records it can display, which this file exceeds.

Indeed, some of the CMS files are as much as a GB of data. And here I thought “Hey, I have a company for this.” (So yeah, if you want to poke through the CMS Open Payments data, all of it is on Exversion right here.)

APIs are nice 🙂

That being said, one wonders exactly what you can do with Open Payments data. It’s natural to look at the words Medicare/Medicaid and assume these are all medical bills, but actually it’s a lot more interesting than that: [1]

This data lists consulting fees, research grants, travel reimbursements, and other gifts the health care industry – such as medical device manufacturers and pharmaceutical companies – provided to physicians and teaching hospitals.

Well now that sounds pretty nefarious. I mean come on, we all know that the money moved around through gifts and grants influences the type of treatments doctors recommend. So now the government is giving you an opportunity to look directly at that activity.

The fact that they took something really interesting and wrapped it up in the most uninteresting way possible is to be expected. It’s a government thing.

Identified -vs- Deidentified Datasets

If you check out our collection of CMS data, the first thing you’ll notice is that each data type is split into two different sets: identified and deidentified. This is not, as I first assumed, the same data with identifying information removed (I admit that wouldn’t actually make any sense to begin with, but in my defense I’ve seen the government do MUCH worse with their open data). Instead, the deidentified set is a collection of cases where some of the necessary data about who received what is missing or ambiguous.

Otherwise what the CMS released fits three categories:

  • General Payments: Payments or other transfers of value not made in connection with a research agreement or research protocol.
  • Research Payments: Payments or other transfers of value made in connection with a research agreement or research protocol.
  • Physician Ownership Information: Information about physicians who have an ownership or investment interest in an applicable manufacturer or GPO.

Looking At the Data: Who Gets the Most Research Dollars?

Essentially what CMS has released is just a dump of their database. Each file has what feels like twenty or more columns, most of which have no information in them. The benefit of accessing this data through an API, as opposed to downloading the file and trying to work with that, is that we can segment the amount of data we’re looking at before committing any computer memory to the task.

The first thing we did was rearrange Research Payments to look at how much money each state received for the year 2013. Because this is a smaller dataset, we wrote a Python script to iterate through each page of data returned by the API, then sort and rearrange as needed. This is not recommended for super large datasets, as you will hit our API’s rate limit pretty quickly, but for this size it wasn’t an issue. We used Python to write nice clean JSON we could paste into d3.js to create an interactive map (click through to see).
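For the curious, the paging-and-aggregating script looked something like this. This is a minimal sketch: the endpoint path, the `_page` parameter, and the two CMS column names are assumptions for illustration, so check the Exversion API docs and the actual file headers for the real names.

```python
import json
from collections import defaultdict
from urllib.parse import urlencode
from urllib.request import urlopen

API_URL = "https://www.exversion.com/api/v1/dataset"  # hypothetical endpoint path


def fetch_all_pages(dataset_id, api_key):
    """Walk through every page of a dataset via a '_page' parameter
    (parameter name assumed), accumulating rows until a page comes back empty."""
    records, page = [], 1
    while True:
        query = urlencode({"key": api_key, "_page": page})
        with urlopen(f"{API_URL}/{dataset_id}?{query}") as resp:
            batch = json.load(resp).get("body", [])
        if not batch:
            break
        records.extend(batch)
        page += 1
    return records


def totals_by_state(records):
    """Sum payment amounts per recipient state. The two column names here are
    assumed from the CMS export; adjust them to match the real schema."""
    totals = defaultdict(float)
    for rec in records:
        state = rec.get("Recipient_State")
        if state:
            totals[state] += float(rec["Total_Amount_of_Payment_USDollars"])
    return dict(totals)
```

Something like `json.dumps(totals_by_state(fetch_all_pages(...)))` then produces the clean per-state JSON that can be pasted into d3.js.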

interactive map

Along the way we discovered something funny: all the payment data fell between the months of August and December. After a little research we discovered that this is a relatively new thing for CMS. The mandate to release this information was part of the Affordable Care Act, and since 2013 was the first year, they could not collect a full year’s worth of data.

That means 2014’s files will be EVEN LARGER.

Will Doctor For Food

Anyway, we wanted to poke around the General Payments file; it seemed like the most interesting stuff would be there. But the identified version is over a GB… kind of unpalatable.

Luckily with Exversion we can take a sample and play around with that instead. How about 50,000 records? Fetching 50,000 records and analyzing them took seconds. All we had to do was add the ‘_limit’ parameter to our request.
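In code, that request looked roughly like the sketch below. The endpoint path is a placeholder invented for illustration; only the `_limit` parameter comes from the text above.

```python
import json
from urllib.parse import urlencode
from urllib.request import urlopen

API_URL = "https://www.exversion.com/api/v1/dataset"  # hypothetical endpoint path


def sample_url(dataset_id, api_key, limit=50000):
    """Build the request URL; '_limit' caps how many rows come back."""
    return f"{API_URL}/{dataset_id}?{urlencode({'key': api_key, '_limit': limit})}"


def fetch_sample(dataset_id, api_key, limit=50000):
    """Pull a bounded sample instead of the full 1 GB download."""
    with urlopen(sample_url(dataset_id, api_key, limit)) as resp:
        return json.load(resp)
```

The point of the design is that memory use is bounded up front: the server does the truncation, so your machine never sees more rows than it asked for.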

I bet if I told you Big Pharma was paying physicians and teaching hospitals in FOOD you wouldn’t believe me, but here’s the breakdown of that 50,000 record sample:

Type of Payment                                            Number of Payments    Total Amount
Food and Beverage                                                       40302   $1,037,941.01
Gift                                                                       55      $80,035.93
Consulting Fee                                                            924   $1,917,576.16
Grant                                                                     382   $3,702,231.73
Travel and Lodging                                                       2710     $862,276.81
Compensation for serving as faculty                                        90     $194,884.57
Royalty or License                                                        100   $6,913,971.38
Current or prospective ownership or investment interest                     4     $529,830.08
Entertainment                                                             108       $5,977.24
Compensation other services                                              2787   $3,256,475.94
Honoraria                                                                  27      $95,666.70
Education                                                                2436     $456,251.12
Charitable Contribution                                                    14      $97,773.60
Space rental                                                               61      $72,862.55

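A breakdown like the table above takes only a few lines of grouping once the sample is in hand. Here is a sketch; the two column names are assumed from the CMS export and may differ from the real headers.

```python
from collections import defaultdict


def breakdown_by_type(records):
    """Count payments and sum dollar amounts per payment category.
    Returns {category: (number_of_payments, total_amount)}."""
    stats = defaultdict(lambda: [0, 0.0])
    for rec in records:
        kind = rec["Nature_of_Payment_or_Transfer_of_Value"]  # assumed column name
        stats[kind][0] += 1
        stats[kind][1] += float(rec["Total_Amount_of_Payment_USDollars"])
    # round totals to cents for display
    return {k: (count, round(total, 2)) for k, (count, total) in stats.items()}
```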
The interesting thing here is that in our sample the vast majority of payments are tiny amounts related to wining and dining doctors and hospitals, yet those do not add up to the most money spent. No, much more money is spent on royalties and grants, but that money goes to only a handful of institutions.

So there you go. Now you can play around with CMS’s Open Payments data without worrying about choking your computer. Can’t wait to see what the rest of the internet does with this.

Blaming Victims: How Stats Frame Our Perspective

We all know that you can manipulate the way statistics are presented to change their meaning, but you probably haven’t given much thought to the way their presentation affects how you see the world. I’m not talking about trusting a misleading statistic and believing in a policy or position that isn’t true. I’m talking about the way statistics influence how we define problems and, consequently, what solutions we spend time and energy looking for.

Consider crime. Most crimes have two sides to them: those who commit the crime, and those who are victims of the crime. But the statistics we collect are inevitably obsessed with the victims. Google “Odds of being murdered” and hundreds of reports come up with authoritative sounding numbers. Here’s one from The Economist. Here’s another from Yale University.

Now try finding the odds that you will BECOME a murderer.

That’s not nearly as easy, which is odd when you consider it is virtually the same set of numbers that we have already collected. We just need to change what we’re counting.

And yet… Here’s an informal back-of-the-napkin calculation from Deadspin on your odds of knowing a murderer. Here’s something from Reuters about gun ownership increasing the risk of suicide or murder.

There are of course studies exploring the odds that a convicted murderer will kill again. There are odds of you getting AWAY with murder. There are stats on the male/female breakdown of roles when murders do happen. But there are virtually no statistics on your odds of one day killing someone.

On the surface this might seem like a trivial, almost obnoxiously pedantic issue. Why would anyone ever need to know their odds of committing a crime? You can control whether you commit a crime! Being a victim of a crime involves a certain amount of chance, so of course knowing your odds and how those odds are influenced by certain factors must be useful in protecting yourself.

But there’s one very big problem with this type of thinking: it automatically focuses us on solutions that prevent (or otherwise decrease the odds of) victims becoming victims, instead of preventing criminals from becoming criminals. From an individual point of view, looking for ways to minimize your risks makes a lot of sense. As an individual you can’t control anyone else and you might have little recourse after the fact. You focus on your decisions and behaviors because that is what you can actually do something about.

However, the same cannot be said for society as a whole. Society does have the ability to tell people what to do, and the power to enforce consequences when those prescriptions are violated. One would think society would also have a vested interest in minimizing the number of criminals. Criminals, generally speaking, are not fully productive, contributing members of society. From society’s point of view, criminals cost way more than victims.

And yet we devote only a fraction of the narrative to exploring the factors that lead to people becoming criminals. About the only time you will see any statistics on this topic is in discussions of low-income neighborhoods, and even then the stats are usually the odds of a person ending up in jail.

Not everyone in jail deserves to be there.

Sexual Assault and Statistics

You cannot possibly develop a solution for a problem if there is no discussion of the problem in the first place. If the conversation does not happen, people do not think about it. If people do not think about it, they do not recognize opportunities for solutions.

And in this case, by ignoring one half of the criminal-victim dynamic, we may also be ignoring the most effective solutions.

Consider rape. You probably realize already that a lot of the potential “solutions” to sexual assault end up asking potential victims to submit themselves to a ridiculous series of seemingly arbitrary dress codes, behavioral rules, and institutionalized paranoia. When those provisos fail it is assumed that the victim did not follow them carefully enough.

There are many situations that may lead to a sexual assault. Walking down the wrong street. Wearing something provocative. Getting drunk at a party. Dating a creepy guy.

Yet the same situations could just as easily NOT result in rape. For all our work collecting stats to protect victims, we actually don’t have much information as to why that is or how much this presumed “bad behavior” actually increases your risks. Unlike the lack of data on criminals, this isn’t a deliberate bias. Such data is really hard to collect.

Nevertheless, the consequence of framing the problem of sexual abuse in terms of the odds of becoming a victim is that the solutions this perspective provides are not all that effective at minimizing the rate of sexual abuse. After all, if wearing the right things, not hanging out with strange men, and not going out alone prevented abuse, Saudi Arabia would have the lowest rate of violence against women in the world (spoiler alert: it doesn’t).

Really all the victim bias does is enforce a state of terror in the perceived potential victims … who in fact might not be the most likely victims in the first place (for example the majority of rape victims in the military in 2012 were men). So it’s not just a state of abject terror, it’s a state of pointless abject terror.

What would happen if instead of having stats like this beaten into our heads at every conceivable opportunity:

– 1 in 5 women will be raped
– 30% of them will be raped by people they know
– Every 2 minutes someone somewhere in America is sexually violated

…we were constantly reminded of stats like these (all made up):

– 1 in 5 men will commit rape
– You are ten times more likely to rape your partner than a stranger
– Every 2 minutes someone in America is sexually violating someone else

Even though the second set may seem unnecessarily antagonistic (almost Minority Report-esque in its assumptions), it has the unique effect of changing the focus of the problem. While there are millions of uncontrollable and unpredictable contributing factors that might lead up to a victim being raped, there’s really only one factor that leads to a person becoming a rapist. Rape is a choice – perhaps not always a MALICIOUS choice (statutory rape, for instance), but nevertheless a choice. No one accidentally rapes another person. No one commits a rape because someone else happens to wear the wrong thing. One could argue the occasional outlier case of rape-by-miscommunication, but one can’t deny that if the focus of the public conversation were “how do we keep people from becoming rapists?” rather than “how do we keep people from getting raped?”, would-be “unintentional” rapists would probably take more precautions in ensuring consent is clearly articulated, thereby eliminating these cases.

In other words, reframing the statistics changes how we try to solve the problem to emphasize decisions we actually have control over. As a woman I cannot predict what hemline is long enough to avoid provoking lurking rapists, but the rapists themselves can easily choose not to rape.

It is tempting to assume that because we theoretically welcome free and open discussion, we are able to see all sides of an issue with very little mental exertion. But really we are programmed to see certain sides and rarely if ever look beyond them. Statistics in this sense provide a false sense of security, because it is not obvious how they can be framed to completely remove large parts of the situation from consideration.