I attended a talk by Dr. Lisa Porter on Friday. Dr. Porter is the director of IARPA, the intelligence community's equivalent of DARPA. Her talk, "The Scientific Challenges of the Intelligence Community," pointed out that for the intelligence community (and for agencies like NASA as well), data collection isn't the main problem. The real challenge is analyzing increasingly voluminous mountains of data. How much of the solution might come from automation remains unclear.
She described a number of projects IARPA has been working on to address some of these challenges, but throughout the talk I was bothered by something I couldn't put my finger on. The focus seemed to be on delivering the best possible analysis to the country's senior leadership. It sounded as if the question of how senior leadership will interpret the findings is beyond the scope of IARPA.
The day before, I had been reading a lot about the Challenger accident while drafting a Teaching Note for a Challenger case study we are using in workshops. I was working through a number of academic papers on the communication issues the case study highlights. As an intelligence analyst or an engineer, you would certainly hope (perhaps even expect) that your findings will be interpreted as you intended. As with any communication, however, the expectation that the intent of the message will be transmitted perfectly is misplaced.
Here is what was bothering me: the accuracy of the findings being presented does not necessarily correlate with good decision-making. Then again, incomplete or inaccurate findings can't do much to support good decision-making either. I'm also not sure what "good decision-making" means.