Leader Due Diligence – Psychometrics as Part of Financial Transparency?

With Bernie Madoff the latest in a series of massive financial frauds perpetrated by leaders who misrepresented themselves, the time may have come to broaden the financial world’s definition of “transparency”.  I’d like to offer a broader view that includes publicly available reports on leadership knowledge, skills, abilities, traits, values and interests.  Would you have invested in Madoff’s Ponzi scheme if you had previously reviewed a report from a trusted authority on leadership assessment noting that he is low on conscientiousness and prudence?  How would a board view this same report on a founder-CEO?

How well do you know your leaders?

Poor leadership is common, but leaders rarely fail in such a public way.  In one study of nearly 400 Fortune 1000 companies, 47% of executives and managers rated their company’s overall leadership as fair or poor, and only 8% rated it as excellent (Csoka, 1998).  Personality traits predict both performance and ineffective leadership.  For example, conscientiousness is one of the “Big 5” factors of normal personality that has been shown to consistently predict both job performance and dishonest behavior in the workplace.  Former professors of mine, Robert and Joyce Hogan, have written extensively about this area and have authored some of the better classical test theory instruments for normal personality, the “dark side” or dysfunctional leadership, and leader motives, values and preferences.  Yet these sorts of assessments are rarely used systematically by boards, even in private, to plan CEO development.  And it is entirely unheard of for these reports to be shared publicly with prospective customers, partners and shareholders.  Given the risk and the recent lack of confidence in markets, perhaps we should reconsider making them transparent, systematically.  The free paper I drafted, “The Three Stooges of Operational Risk: Advances in Leadership Due Diligence and Rasch Measurement”, proposes a way of improving our leadership assessments; if desired, they could be used for this transparency purpose.  I welcome your feedback.

Special thanks to Alexei M for inspiring this idea.

References

Csoka, L. S. (1998).  Bridging the Leadership Gap.  New York: Conference Board.

Hogan, R., Curphy, G., & Hogan, J. (1994).  What We Know About Leadership: Effectiveness and Personality.  American Psychologist, 49(6), 493-504.

Robie, C., Brown, D., & Bly, P. (2008, March).  Relationship Between Major Personality Traits and Managerial Performance: Moderating Effects of Derailing Traits.  International Journal of Management, 25(1), 131-139.

Madoff Destroys $50 Billion with “Giant Ponzi Scheme”

Bernie Madoff is the latest in a series of senior executives to destroy value, this time with an apparent $50 billion fraud, according to the Financial Times.  Madoff, a former Chairman of the NASDAQ stock market, on Thursday admitted to his employees, including his two sons, that his operations were “all just one big lie” and “basically, a giant Ponzi scheme”.  The alleged fraud is the largest investor fraud ever blamed on a single individual.

Previously, I had written about the “Three Stooges of Operational Risk“, where I detailed senior executive value destruction by Ken Lay of Enron, Bernie Ebbers of WorldCom and, most recently, Dick Fuld‘s follies at Lehman Brothers.  For two of those three I noted the dishonesty and fraud that accounted for their downfall, similar to Madoff.  But unlike Madoff, they were less candid about their fraud.  After Madoff’s brazen alleged admission, is there any remaining doubt that leadership due diligence is a critical part of the process of hiring senior executives?  Could it be any clearer that the pre-hire assessment procedure is a non-trivial subset of Enterprise Risk Management?

In fairness, these Industrial-Organizational Psychology methods have their limitations.  No forecast could ever be perfect, and even the best assessment procedures account for only 30-60% of the variance in job performance.  But it’s relatively rare that factors such as conscientiousness are used to screen executives – and low conscientiousness strongly predicts dishonest and imprudent behavior in the workplace like Madoff’s.  With new methods from Rasch Measurement, Computer-Adaptive Testing, and an innovation from The Scientific Leader, “Inverted Computer-Adaptive Testing” using Virtual Reality, it’s increasingly difficult for people to fake or misrepresent themselves on these assessments.

How much risk are you accepting when you use standard interviews to hire your employees?

Top Conference Salutes The Scientific Leader

The Scientific Leader is pleased to announce that three submissions to the 2009 Society for Industrial-Organizational Psychology (SIOP) conference have been accepted.

The Society for Industrial-Organizational Psychology (SIOP) is the premier professional association for scientists and practitioners of human behavior in the workplace.  Each year, it holds a popular conference with peer-reviewed papers and symposia.  The number of proposed sessions always far outstrips the number of available slots, so the peer-review group that decides on placement applies relatively high standards for acceptance.

Three submissions were accepted for presentation at the next conference, to be held in New Orleans.

1.  Enhancing Utility Analysis: Introducing the Cue See Model

I’m particularly proud of this paper, as it represents a new approach to asset valuation, both tangible and intangible.  While traditional I/O Psychology has its own tradition for quantifying the value of human performance, called Utility Analysis, it largely ignores ideas from the other organizational sciences.  My paper tries to synthesize ideas from finance, psychology, industrial/systems engineering, computational organizational theory, and computer science into the “Cue See Model”.  The hope is that the approach can be useful to managers and theoreticians by helping to specify how a company creates profit across levels.  Once that is understood, the Cue See Model can be used to track it objectively, without relying on subjective human ratings.  My hope is that future studies will empirically demonstrate the model’s efficacy and help the field avoid its traditional problem of measuring and monetizing outcomes.

2.  Succession Planning: Beyond Manager Nominations

Led by our parent company, Human Capital Growth’s Dr. Shreya Sarkar-Barney, this panel discussion will include experts on leadership discussing the use of psychometric assessment instruments and other science-based practice methods for succession.  Panelists include:

Shreya Sarkar-Barney, Human Capital Growth, Chair
Matt Barney, Infosys, Panelist
Eric Braverman, Merck, Panelist
Lori Homer, Microsoft, Panelist
Jennifer Irwin, Procter & Gamble Company, Panelist
Kevin Veit, Gabbard and Co, Panelist

3. The Role of IO Psychology in Resolving the Healthcare Crisis

I was asked to be the discussant – a senior leader with expertise in the area who comments on all the papers in the session – since I was previously the Chief Learning Officer & VP for Sutter Health.  The session will focus on interventions targeted at improving outcomes related to quality of patient care.  The interventions to be covered focus on selection, leadership and culture, team training, safety, and others.  The session will represent research at various levels of the organization, including management, nurses, and frontline staff.

Kristin Charles, Kronos Talent Management, Co-Chair
David Scarborough, Kronos Talent Management/Black Hills State U., Co-Chair
Justin Rossini, DDI, Inc., Author
Sallie Weaver, University of Central Florida and MedAxiom, Author
David Hofmann, Univ of North Carolina at Chapel Hill, Author
Matt Barney, Infosys, Discussant

The papers I’ll be reviewing in this session include:
Defining quality of care: Behavioral competency models across nursing departments – Kristin Charles, Autumn Krauss
Addressing Care Quality, Engagement, and Retention Likelihood: A Selection Perspective – Justin Rossini
Can Team Training Improve Operating Room Quality of Care? – Sallie Weaver, Michael Rosen, Deborah DiazGranados, Rebecca Lyons, Elizabeth Lazzara, Andrea Barnhard, Eduardo Salas
Leadership Levers to Motivate Error Management – David Hofmann, Adam Grant

I hope some of the readers of this blog are able to attend this excellent conference, and if you are, please comment below or send me a note (matt at scientificleader.com) so I’ll get to meet you.

Electronic Health Information Systems Natural For Rasch Computer Adaptive Testing

I was pleased to discover a new instrument in the journal Psychiatry that uses Rasch Measurement to better assess catatonia.  Catatonia is a psychiatric condition that groups a number of pathologies ranging from bipolar disorder to drug abuse and schizophrenia.  The authors use a partial-credit Rasch approach that allows ratings along a behaviorally anchored rating scale; the original article shows an example of one of the scale’s partial-credit items.
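
For readers who haven’t seen it written out, one common way to express the partial-credit Rasch model is shown below (standard notation of my own choosing, not necessarily the KANNER authors’):

```latex
% Partial-credit Rasch model: probability that person n responds in category x of item i
P(X_{ni} = x) =
  \frac{\exp\!\left( \sum_{k=0}^{x} (\theta_n - \delta_{ik}) \right)}
       {\sum_{j=0}^{m_i} \exp\!\left( \sum_{k=0}^{j} (\theta_n - \delta_{ik}) \right)},
  \qquad \delta_{i0} \equiv 0
```

Here θ_n is the person measure, δ_ik is the threshold for moving from category k−1 to category k of item i, and m_i is the item’s highest category.  Each step up the behaviorally anchored rating scale is modeled with the same simple person-minus-threshold structure as a dichotomous Rasch item, which keeps every rating category on the same underlying ruler.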

This research did not benefit from the Electronic Health Record (EHR) or Electronic Health Information Systems that are starting to be used throughout healthcare.  It seems inevitable that as EHRs become ubiquitous, computer-adaptive Rasch scales like the authors’ new KANNER scale will become commonplace.  Whether for patient pain measurement and consequent medication, psychopathology, or occupational therapy/science, Rasch Measurement has a long history of successful use in these areas – but without the benefit of seamless integration of a CAT in your health record.  I foresee CAT-based assessments being combined with Statistical Process Control (SPC) and Fuzzy Logic to better track patient conditions over time.  In hospital settings, I suspect this will happen first with patients who have acute and potentially fatal ailments, since they are already tracked continuously.  But non-acute patients, through wireless devices, could easily benefit from these approaches as well.  This is particularly true if medicine were to use one of the patents I authored while at Motorola, still pending, that combines computer-adaptive assessment and wireless devices.
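
As a concrete illustration of the CAT-plus-SPC idea, here is a minimal sketch, entirely mine and not from the KANNER article or any shipping product, that puts repeated Rasch person measures on a simple Shewhart-style individuals chart and flags readings that drift outside 3-sigma limits.  The baseline data, the limits, and the alerting rule are illustrative assumptions.

```python
import numpy as np

# Hypothetical example: repeated Rasch person measures (in logits) for one
# patient, e.g. from a daily computer-adaptive pain or symptom scale.
baseline = np.array([-0.4, -0.1, 0.2, 0.0, -0.2, 0.1, -0.3, 0.0])  # stable period
new_readings = np.array([0.1, 0.3, 1.6, 1.9])                      # most recent measures

# Shewhart-style individuals chart: center line and 3-sigma limits estimated
# from the stable baseline. (A production chart would usually estimate sigma
# from the average moving range; the plain standard deviation keeps this short.)
center = baseline.mean()
sigma = baseline.std(ddof=1)
upper, lower = center + 3 * sigma, center - 3 * sigma

for day, measure in enumerate(new_readings, start=1):
    out_of_control = measure > upper or measure < lower
    status = "ALERT: condition may have shifted" if out_of_control else "in control"
    print(f"day {day}: {measure:+.2f} logits ({status})")
```

In practice the measures would flow in from a CAT embedded in the EHR rather than being typed in by hand, and fuzzy or multi-rule alerting logic could replace the single 3-sigma rule.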

This is a good example of the power of interdisciplinary innovations combining for something significantly better.  Rasch Measurement comes from Psychometrics; Statistical Process Control was developed by Industrial/Systems Engineers and Statisticians; Wireless technology comes from Computer Scientists; and the content comes from the various health sciences including medicine, nursing, physical therapy and occupational therapy.

Thousands of UK Patients Flee Government Healthcare

Since the British government started allowing NHS patients to choose private hospitals in 2006, the number of citizens doing so has risen 10-fold, to over 3,500 per month, according to the BBC.  While this still represents less than 1% of overall non-emergency treatment, the trend shows that people in the UK want free-market quality and service and are choosing something other than government health care.  The catch is that the private hospitals must agree to NHS prices, but thus far 147 have agreed to these terms.  It raises the question: how much better would UK healthcare be if it were completely liberalized?

While the British concern is that the NHS will lose government money, the real concern is the health and well-being of people.  The reality is that medical tourism is already on the rise globally, and smart British shoppers are already getting treatments in India, Mexico, Thailand, and other lower-cost, high-quality locations.  Having worked for three years in US quasi-socialized healthcare, and having seen the horrors of government control of healthcare, from micromanaging physician relationships to attorneys being involved in nearly everything, I think the UK trend away from government is a good thing for Scientific Leaders in English healthcare.  Will the US learn from this lesson?

Utah Leadership Supports Computer-Adaptive Testing In Spite of “No Bureaucrat Left Behind” Act

Nine schools in Utah have found that the benefits of Computer-Adaptive Testing trump older methods.  Adaptive tests change to match a student’s skill level, avoiding wasted time and effort on questions that are far below or above their proficiency.  They’re also at least 20% shorter.  This allows for periodic reassessment and a personalized focus on the specific curricular areas a learner needs to work on.  Each student is treated as a special, unique person.
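
To make the mechanics concrete, here is a minimal sketch of one adaptive-testing loop under a dichotomous Rasch model.  The item bank, the grid-search ability estimate, and the fixed 20-item cap are illustrative assumptions of mine, not details of Utah’s actual tests, and collect_response is a hypothetical stand-in for however a real system records the student’s answer.

```python
import numpy as np

item_difficulty = np.linspace(-3.0, 3.0, 61)   # hypothetical calibrated item bank (logits)

def estimate_ability(administered, responses):
    """Maximum-likelihood Rasch ability estimate by grid search over theta."""
    grid = np.linspace(-4.0, 4.0, 801)          # the grid also bounds the estimate
    b = item_difficulty[administered]
    p = 1.0 / (1.0 + np.exp(-(grid[:, None] - b[None, :])))   # P(correct) per theta, per item
    loglik = (responses * np.log(p) + (1 - responses) * np.log(1 - p)).sum(axis=1)
    return grid[np.argmax(loglik)]

administered, responses = [], []
theta = 0.0                                     # start in the middle of the scale
for _ in range(20):                             # cap on test length
    remaining = [i for i in range(len(item_difficulty)) if i not in administered]
    # A Rasch item is most informative when its difficulty is near the examinee's
    # ability, so administer the unused item closest to the current estimate.
    next_item = min(remaining, key=lambda i: abs(item_difficulty[i] - theta))
    answer = collect_response(next_item)        # hypothetical hook: returns 1 (right) or 0 (wrong)
    administered.append(next_item)
    responses.append(answer)
    theta = estimate_ability(np.array(administered), np.array(responses))
```

Because each new item is chosen near the current estimate, little time is wasted on questions that are far too easy or far too hard, which is where the shorter test lengths come from.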

But the US Federal Government’s Department of Education is behind the times and is making it difficult for Utah to use these modern psychometric methods, according to Utah’s Daily Herald.  The “No Child Left Behind Act” requires outdated, non-adaptive methods to be used in addition to the modern approaches.  On the surface, the DoE’s requirement for peer review sounds like a good thing, but in applied settings it is rarely used.  The instruments I’ve developed would certainly pass the scrutiny of my peers, and the feedback peers give is useful.  But these extra steps are typically unnecessary to ensure that instrumentation is useful, as long as professionals develop the Computer-Adaptive Tests.  It is downright destructive to children for the federal government to force Utah to use outdated, longer, and less precise measures of learning.  While I presume the measures favored by Washington are “peer reviewed”, I suspect that the review committee is selected from friends of politicians and is likely unskilled in recent developments in computer-adaptive measurement.

Fortunately, Utah appears to have visionary, contemporary leadership that steadfastly supports good measurement to help children learn.  The Utah Legislature, the State School Board and the Governor all approved the plan to continue using it – yet the Feds require the outdated assessments to be used as well.  This is a hassle, an unnecessary cost, and an opportunity cost: the children could have spent the time they’ll take on the DoE tests learning something new.  Are you a visionary leader like the folks in Utah?  More by The Scientific Leader on Computer-Adaptive Measurement, applied to organizations and business, is available free here.

Clash of the Psychometric Titans: Rasch and IRT

Does the approach you take to measuring your customers, employees or patients really matter?  If bad decisions are costly, then yes, it really does.  As an Industrial/Organizational Psychologist, I was taught the science of human measurement.  This included historical treatment of “true score” or classical test theory and also Item Response Theory (IRT).  Classical instruments are slow to create and require comparisons with others (“norms”) to make sense of the results.  But when you measure temperature with a thermometer, do you need to know the distribution of other thermometers’ readings in the area for yours to make sense?  No – the physical and biological sciences had a long history of successful measurement well before the social sciences’ pseudo-measurement approach.

IRT is relatively better than Classical Test Theory; however, it violates some of the physical sciences’ axioms for measurement, is very complex, requires large sample sizes, and produces weird results that are nearly impossible to explain to the untrained.

Unfortunately, I/O Psychologists like me don’t get training in Rasch Measurement beyond being told that it’s the same as the simplest form of IRT.  This isn’t accurate – rather, these are two competing paradigms, and 12 years past my Ph.D., I’ve decided to dedicate myself to Rasch Measurement for both practical and scientific reasons.
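
To see why Rasch is not simply “the one-parameter IRT model”, compare the standard textbook forms of the dichotomous models (general forms, not tied to any particular instrument):

```latex
% Dichotomous Rasch model: one parameter per item (difficulty b_i)
P(X_i = 1 \mid \theta) = \frac{e^{\,\theta - b_i}}{1 + e^{\,\theta - b_i}}

% Two-parameter IRT model (2PL): adds a per-item discrimination a_i
P(X_i = 1 \mid \theta) = \frac{e^{\,a_i(\theta - b_i)}}{1 + e^{\,a_i(\theta - b_i)}}

% Three-parameter IRT model (3PL): adds a lower asymptote ("guessing") c_i
P(X_i = 1 \mid \theta) = c_i + (1 - c_i)\,\frac{e^{\,a_i(\theta - b_i)}}{1 + e^{\,a_i(\theta - b_i)}}
```

In the Rasch paradigm the data are expected to fit the fixed one-parameter form (equal discriminations, no guessing floor), which is what underwrites the invariance and raw-score properties listed below; in the IRT paradigm the model is elaborated with extra parameters until it fits the data.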

Practical Benefits of Rasch:

  • Smaller, even unrepresentative, samples are sufficient to be useful
  • Accuracy and precision have the same meaning in psychology as in physics and biology, which makes it easier to communicate with physical- and biological-science colleagues
  • People and items are on the same “ruler”.  For development, this is extremely useful for focusing learning on areas that are “just right” and not too hard or too easy.
  • Raw scores are sufficient statistics.  When communicating the results of a test, quiz, or assessment, this is essential.  With IRT, you can have a lower raw score but a higher measure – good luck explaining that to parents and juries (see the sketch after this list).
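
Here is a toy illustration of that last point, a minimal sketch with made-up item parameters rather than any real test: under the Rasch model, everyone with the same raw score gets the same measure, while under a two-parameter IRT model the particular items answered correctly change the estimate.

```python
import numpy as np

b = np.array([-1.0, 0.0, 1.0, 2.0])     # item difficulties (logits), made up
a_2pl = np.array([0.5, 1.0, 1.5, 2.0])  # 2PL discriminations; the Rasch model fixes all of these at 1

def mle_theta(responses, discriminations):
    """Maximum-likelihood ability estimate by grid search."""
    grid = np.linspace(-4.0, 4.0, 801)
    p = 1.0 / (1.0 + np.exp(-discriminations * (grid[:, None] - b)))
    loglik = (responses * np.log(p) + (1 - responses) * np.log(1 - p)).sum(axis=1)
    return grid[np.argmax(loglik)]

pattern_easy = np.array([1, 1, 0, 0])   # raw score 2: answered the two easiest items correctly
pattern_hard = np.array([0, 0, 1, 1])   # raw score 2: answered the two hardest items correctly

for label, disc in [("Rasch", np.ones(4)), ("2PL", a_2pl)]:
    t_easy = mle_theta(pattern_easy, disc)
    t_hard = mle_theta(pattern_hard, disc)
    print(f"{label}: theta(easy pattern) = {t_easy:+.2f}, theta(hard pattern) = {t_hard:+.2f}")
# The Rasch line prints the same estimate for both patterns; the 2PL line prints two different ones.
```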

After taking two terrific classes from Mike Linacre, I changed my mind and switched all my practice and science to Rasch.  Statistics.com still offers the classes, covering the basics, advanced methods, and the extremely powerful Many-Facet Rasch Measurement that he invented.  While the class format is a bit awkward and highly self-paced, Linacre is an enthusiastic and responsive teacher.  Highly recommended.
