Sunday, September 13, 2009
CIO Risk: Know the Difference Between Boldness and Recklessness
Boldness rests on the ability to tell a smart, calculated risk from a foolish gamble. A smart, calculated risk is an action that is not a certain success but has the potential to deliver extraordinary rewards relative to the risks taken. A foolish gamble promises a similar reward but at the cost of risks bearing dire consequences (pp. 31-32).
Taking a risk that could cause irrecoverable damage should have no place in a CIO's decision-making arsenal. Wherever possible, decisions should be participatory, drawing on the team's collective knowledge to keep the CIO from making wrong decisions and creating disasters.
M. Hugos (2007) provides a case from his practice that illustrates this situation. A CEO failed to recognize the benefit of listening to his team while deciding whether to replace an old power generator or keep trying to repair it. The CEO's frugality was more reckless than rational. The generator's purpose was to supply backup power to the office during an outage. During the hot summer days, the team realized that their 18-year-old generator was malfunctioning. After numerous unsuccessful attempts to repair and tune it, the team manager requested the purchase of a new generator. The CEO refused to fund it because he did not regard it as necessary, and he was unwilling to evaluate the risk of the situation and see the magnitude of the consequences. If a power outage lasted longer than half an hour, the batteries would die because the failing generator could not charge them. The entire IT system would suffer a hard crash, and the resulting damage would be catastrophic: the off-site disaster recovery facility was not yet operational, and the on-site hardware was not covered by warranty for such an event. It would take weeks to restore the system, at a cost far exceeding the price of a new generator, and some of the company's important customers would suffer business losses from the system's unavailability and potential data loss from the hard crash.
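The generator decision can be framed as a simple expected-loss comparison. The case gives no actual figures, so every number below is a hypothetical placeholder; the point is only to show what a calculated-risk assessment, as opposed to a gamble, looks like:

```python
# Illustrative expected-loss comparison for the generator decision.
# ALL figures are hypothetical assumptions, not numbers from the case.

def expected_loss(p_outage: float, loss_if_outage: float) -> float:
    """Expected annual cost of doing nothing: probability of a long
    outage times the cost of a hard crash and weeks of recovery."""
    return p_outage * loss_if_outage

generator_cost = 20_000   # assumed price of a new generator
crash_cost = 500_000      # assumed cost of a hard crash plus weeks of downtime
p_long_outage = 0.10      # assumed yearly chance of an outage over 30 minutes

risk_of_waiting = expected_loss(p_long_outage, crash_cost)

print(f"Expected annual loss without new generator: ${risk_of_waiting:,.0f}")
print(f"One-time cost of a new generator:           ${generator_cost:,.0f}")
print("Buy the generator" if risk_of_waiting > generator_cost else "Defer")
```

Under these assumptions the expected loss of waiting dwarfs the purchase price, which is exactly the evaluation the CEO refused to do.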
The CEO ignored these risks because he needed his budget to look good. This was a clear sign of incompetence, and it created a culture of mistrust among team members. Instead of being ready to be accountable for their actions, people were forced to cover themselves by writing memos in case finger-pointing arose. The team's warnings about the situation were distrusted and stubbornly rejected by the CEO, so the collective wisdom counted for nothing. A culture like this pushes talent out of the organization and leaves behind a team of mediocrity.
Resources:
Hugos, M. (2007). Harnessing IT to Drive Enterprise Strategy. In CIO Best Practices, pp. 31-33.
Monday, August 24, 2009
What role did NASA’s culture play in the Columbia disaster
The concept of the shuttle as a reusable vehicle was created in the 1970s as a result of NASA's budget cutbacks. The technologies involved were experimental, revolutionary, and innovative, but there was pressure to make the shuttle look routine in order to win customer buy-in. NASA's promises about the shuttle's reliability, efficiency, and safety secured substantial funding for the program, but the funding was still insufficient for the complex design specification, so some important safety features, such as an escape system for the crew, were left out of the design. Driven by schedule demands, the final steps of Columbia's development, including tile mounting, were performed not at the manufacturing facility in California but by engineers at the Kennedy Space Center in Florida. The testing regimen was deviated from, and analytic models were used to verify the entire system, which was not normal procedure. Only after the disaster did proper tests identify the technical problem of Columbia's final flight.
A variation of this problem occurred on Columbia's very first flight in 1981 and recurred on every flight after it: foam debris striking and lightly damaging the shuttle's tiles during liftoff. Over those 22 years the damage became systematic, yet NASA never treated it as a serious issue requiring immediate resolution. Over time, the defect turned into an accepted risk and a routine maintenance item after every flight. I think this is one reason it was so hard to hold anyone accountable for the tragedy.
The 1986 disaster of another shuttle, Challenger, was caused by a similar design issue; the shuttle perished with seven crew members aboard. Even that lesson was not enough for NASA to prevent Columbia's tragedy. NASA's cultural practice of playing “Russian roulette” and its attitude of “prove to me that there's something wrong,” instead of “prove to me that it is right,” became a crucial barrier to treating the foam debris strike risk as an issue of elevated severity. On the 2002 Atlantis flight, the foam damage was the “most severe of any mission yet flown.” Even this did not catch NASA management's attention or lead to a serious investigation, or at least to installing better tracking video cameras to observe and document the strikes so engineers could better understand their impact. The pressure to keep subsequent missions on schedule prevailed over all safety concerns when the Shuttle Program decided to fly Columbia. Sadly, the mission's scientific experiments were not of high or critical value, and were even criticized as “childish and elementary.”
Better images of Columbia's left wing, taken from military satellites, could have helped significantly in assessing the damage sustained during liftoff. Unfortunately, the requests for these images did not follow official procedures and were canceled because of miscommunication. Even looking at NASA's own, far less clear images, engineer Rodney Rocha was alarmed and deeply concerned about the possible consequences. Although the event was classified as “out of family,” i.e., not previously experienced, Mission Evaluation Room managers logged it as a “low concern.” The written guidelines for “out of family” events called for forming a highly efficient, well-trained Tiger Team to work with agency contractors to analyze the situation. Instead, an ad hoc group, the Debris Assessment Team (DAT), was formed. This group had a very vague charter and was unfamiliar with the established escalation procedures for handling the situation, in particular for requesting additional data. The DAT did not report to the Mission Management Team (MMT), which made all the important decisions, and there was no direct contact between the two teams. The damage analysis that Boeing produced with a mathematical tool called Crater had a history of exaggerated predictions and was therefore discounted by the DAT. The MMT did not follow the Space Shuttle procedure of meeting daily during the mission, a fact I think is an important piece in the puzzle of NASA's organizational issues. Management's adherence to the schedule for upcoming flights, the lack of communication, and the failure to involve the shuttle's crew in analyzing the situation were the biggest factors behind the wrong decisions. NASA's defensive reaction that there was nothing they could have done is very disturbing and is proof of the culture of ineffectiveness and negligence created in the agency.
References: Richard M. J. Bohmer, Amy C. Edmondson, Michael A. Roberto. Columbia's Final Mission. Harvard Business School.
Friday, July 24, 2009
HBR Case Study – Hewlett-Packard: The Flight of the Kittyhawk (A)
Prior to the Kittyhawk project, DMD held a very profitable position in established markets. Its high-performance 5.25- and 3.5-inch drives were very competitive because they offered higher megabyte capacity than the industry norm. The division planned the new 1.3-inch disk drive around existing market trends, treating it as a sustaining innovation. The company's plan was to expand its computing market share and make HP a major player in the disk-drive industry.
The main causes for Kittyhawk's failure were as follows:
- The company had failed to recognize the new product as a disruptive innovation that was not ready to compete in the existing market
- The DMD picked the wrong target customers and built the wrong product because corporate expectations left no other choice
- The product could not storm any other emerging market in as short a period of time as was expected
- Positioning on the market was done not based on realistic market opportunities but by the company's aggressive revenue expectations
- Existing computing market trends were driven by capacity and cost per megabyte, not by size. Kittyhawk didn't offer any value for the established markets
- Despite a substantial opportunity in emerging markets, DMD tried to please customers in established markets, where performance expectations were high. It included features that made Kittyhawk too expensive for customers in emerging markets.
What could HP have done differently to support the Kittyhawk development team, and implement the marketing strategy to introduce the Kittyhawk product?
- Attract and retain resources experienced in developing new architectures or cultivating emerging markets
- Take into consideration the history of HP's average cycle time for a new disk-drive development time of 18 months and align the project schedule accordingly
- Avoid declaring that this product would be the company's future
- Avoid analyzing markets for this product; those markets did not yet exist
- Use in-house manufacturing to provide flexibility while market demand is formed and until the right product is developed.
Friday, July 3, 2009
Symptoms of ineffective governance occur in both IT Governance and PMO organizations. How are they similar, and how are they different?
The symptoms of ineffective governance are listed below, first for IT governance, and then for the PMO where they overlap.
- Senior management senses low value from IT investments.
- IT is often a barrier to implementing new strategies.
- Mechanisms to make IT decisions are slow or contradictory.
- Senior management cannot explain IT governance.
- Senior management is not supporting a standard project management methodology and PMO policy.
- IT projects often run late and over budget.
- Senior management sees outsourcing as a quick fix to IT problems.
- Governance changes frequently.
Additionally, PMO weakness shows up as any of the following: cross-functional teams are not efficient and productive; stakeholders are uninvolved and unaware of the project's expectations; project management is not valued in the organization; there is no consensus on the PMO's value to the organization; the PMO's organizational maturity is not evolving.
The competency of personnel is a key issue for PMO performance. The inability to attract and retain competent people is a strong symptom of PMO inefficiency.
Resources:
- Weill, P., & Ross, J. W. (2004). IT Governance: How Top Performers Manage IT Decision Rights for Superior Results. Boston, MA: Harvard Business School Press
Wednesday, May 6, 2009
Developing Talent
- Lyle Spencer -
Sunday, April 26, 2009
Quality & Performance
In their study, DeMarco and Lister found no correlation between productivity and programming language, years of experience, or salary. Their study showed that a dedicated workspace and a quiet work environment were the key factors in improving productivity, and they suggested that top management focus on workplace factors to improve it. [2]
1. Eisenberg, Bart. "Achieving Zero-Defects Software." Pacific Connection (January 2004)
2. DeMarco, T., & Lister, T. Peopleware: Productive Projects and Teams. New York: Dorset House, 1987.
Tuesday, April 21, 2009
Project Management Software
Open Source:
http://www.dotproject.net
http://taskjuggler.org
Low-end:
http://business-spreadsheets.com
Collections:
- www.projectreference.com : Site created by former Columbia University instructor with lots of great links
- www.allpm.com : The Project Manager's Resource Center
- www.4pm.com : Project Management Control Tower
- www.pmblvd.com : Project Management Boulevard
- www.tenstep.com : The Tenstep Project Management Process Methodology
- www.projectsatwork.com : Projects @ Work
- www.chiefprojectofficer.com : Chief Project Officer
- www.gantthead.com : Gantthead.com
- www.pmforum.org : Project Management Forum
- www.projectnet.co.uk : ProjectNet
- www.cpbonline.com : The Center for Business Practices
- www.irnop.org : The International Research Network on Organizing by Projects
- www.maxwideman.com : Max's Project Management Wisdom
- www.infogoal.com
Saturday, April 11, 2009
Selecting a system development approach is an important business decision because it can have a big impact on the time, cost, and end product of systems development. Depending on the organization's willingness and capability to change, different levels of risk and return must be taken into consideration. Businesses today are required to build applications rapidly to stay competitive. Development involves many departments in requirements gathering and, in most cases, creates a need to change business processes as pain points are identified and ways to resolve them are chosen. Managers should be aware of the different approaches, and should evaluate and choose the right one based on the type of organization, its own resources, and the desired control over the process. All stakeholders should participate in the selection.
In business process reengineering, the first step, identifying which business processes need improvement and which have the highest priority, requires strategic analysis and the determination of pain points by senior management. Identifying and describing the existing processes and understanding their costs and durations leads to the next step: deciding how to improve those processes, which can involve different layers of the organization, and even multiple companies when the processes are shared.
2. Some have said that the best way to reduce system development costs is to use application software packages. Do you agree? Why or why not?
If an organization lacks internal development resources and has fairly standard business processes that can easily be set up in an application software package, then packages are a good way to go, provided the best vendor solution is chosen. Some vendors specialize in industry-preset solutions for their systems. If an organization has unique requirements, however, customizing a package can be very costly and can complicate future software upgrades and maintenance.
Sunday, April 5, 2009
- Summary of the article "The Experience Trap," by Kishore Sengupta, Tarek K. Abdel-Hamid, and Luk N. Van Wassenhove. Harvard Business Review, Vol. 86, Issue 2, February 2008
“As projects get more complicated, managers stop learning from their experience. It is important to understand how that happens and how to change it.”
The authors researched experience-based learning in complex environments. They used a computer-based game to simulate managing a software project from start to finish, with the goal of delivering on time, within budget, and with the highest possible quality. The simulation games were set up to examine the decision making of experienced managers in a variety of contexts. The results showed that managers did not take the consequences of their previous decisions into account when making new ones.
When people make a decision, they rely on their previous experience and knowledge. In a simple environment, cause-and-effect relationships are easy to discover, but in complex ones, such as software projects, it does not always work that way. The authors recognized three causes of this breakdown in learning and suggested ways for organizations to enable learning from experience on complex projects.
Time lags between causes and effects. Example: hiring a new team member mid-project introduces a lag for recruiting and assimilation.
Fallible estimates. In software projects, initial estimates usually turn out to be wrong, yet managers do not correct their productivity estimates during the project.
Initial goal bias. Project scope usually grows, and sticking to the initial targets actually creates counterproductive outcomes.
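The time-lag cause is the easiest to see in a toy simulation. The sketch below is my own illustration, not the authors' actual model, and every parameter (team size, ramp-up length, mentoring drain) is an invented assumption. It shows why adding a developer mid-project first *lowers* output before eventually raising it:

```python
# Toy sketch of the time-lag cause: a new hire added at week 4 only
# reaches full productivity after a 4-week assimilation period, and
# drains some veteran capacity for mentoring while ramping up.
# All parameters are illustrative assumptions, not from the study.

def weekly_output(week: int, hire_week: int = 4, lag: int = 4) -> float:
    veterans = 3.0                                # 3 experienced developers
    if week < hire_week:
        return veterans
    ramp = min(1.0, (week - hire_week) / lag)     # new hire ramps from 0 to 1
    mentoring_drain = 0.5 * (1.0 - ramp)          # veterans lose time mentoring
    return veterans - mentoring_drain + ramp

outputs = [weekly_output(w) for w in range(10)]
print(outputs)  # output dips right after the hire, then exceeds the old level
```

A manager judging the hire by the very next week's output would conclude it was a mistake, which is exactly the delayed-feedback trap the authors describe.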
To fix the experience learning cycle:
Provide more cognitive feedback.
Apply model-based decision tools and guidelines.
Calibrate your forecasting tools to the project.
Set goals for behavior, not targets for performance
Develop project “flight simulators”.
The experiments showed that learning on the job works only in simple environments, not complex ones. To be successful, managers need more formal training and decision-support tools tailored to their specific projects.
Saturday, April 4, 2009
Can You Say What Your Strategy Is?
- Summary of the article "Can You Say What Your Strategy Is?" by David J. Collis and Michael G. Rukstad. Harvard Business Review, April 2008