
Friday, July 29, 2022

A Lesson from the Accounting Profession: Don’t Cheat on the Ethics Test

SEC Order

Accounting, like many professions, requires practitioners to regularly demonstrate competence and familiarity with relevant knowledge and practices.  One requirement for Certified Public Accountants (CPAs) is to take an on-line, multiple-choice test covering professional ethics.  Sounds easy but the passing grade is relatively high so it’s not a slam dunk.  Some Ernst & Young (EY) audit accountants found it was easier to pass if they cheated by using answer keys and sharing the keys with their colleagues.  They were eventually caught and got into big trouble with the U.S. Securities and Exchange Commission (SEC).  Following is a summary of the scandal as it evolved over time per the SEC order* and our view on what the incident says about EY’s culture.

During 2012-15, some EY employees were exploiting weaknesses in the company’s test software to pass tests despite not having a sufficient number of correct answers.  EY learned about this problem in 2014.  In 2016, EY learned that professionals in one office improperly shared answer keys.  EY repeatedly warned personnel that cheating on tests was a violation of the firm’s code of ethics but did not implement any additional controls to detect this misconduct.  The cheating continued into 2021.

In 2019 the SEC discovered cheating at another accounting firm and fined them $50 million.  As part of the SEC’s 2019 investigation, the agency asked EY if they had any problems with cheating.  In their response, EY said they had uncovered instances in the past but implied they had no current problems.  In fact, EY management had recently received a tip about cheating and initiated what turned out to be an extensive investigation that by late 2019 “confirmed that audit professionals in multiple offices cheated on CPA ethics exams.” (p. 6)  However, EY never updated their response to the SEC.  Eventually EY told the Public Company Accounting Oversight Board (PCAOB)** about the problems, and the PCAOB informed the SEC – 9 months after the SEC’s original request for information from EY.

In the U.S., the relationship between government regulators and regulated entities is based on the expectation that communications from the regulated entities will be complete, truthful, and updated on a timely basis if new information is discovered or developed.  Lying to or misleading the government, either through commission or omission, is a serious matter.

Because of EY’s violation of a PCAOB rule and EY’s misleading behavior with the SEC, the company was censured, fined $100 million, and required to implement a host of corrective actions, summarized below.

Review of Policies and Procedures

“EY shall evaluate . . . the sufficiency and adequacy of its quality controls, policies, and procedures relevant to ethics and integrity and to responding to Information Requests” (p. 9)  In particular, EY will evaluate “whether EY’s culture [emphasis added] is supportive of ethical and compliant conduct and maintaining integrity, including strong, explicit, and visible support and commitment by the firm’s management” (p. 10)

Independent Review of EY’s Policies and Procedures

“EY shall require that the Policies and Procedures IC [Independent Consultant] conduct a review of EY’s Policies and Procedures to determine whether they are designed and being implemented in a manner that provides reasonable assurance of compliance with all professional standards . . . . EY shall adopt, as soon as practicable, all recommendations of the Policies and Procedures IC in its report. . . . EY’s Principal Executive Officer must certify to the Commission staff in writing that (i) EY has adopted and has implemented or will implement all recommendations of the Policies and Procedures IC in its report . . .” (pp. 10-12)

Independent Review of EY’s Disclosure Failures

“EY’s Special Review Committee shall require that the Remedial IC conduct a review . . . of EY’s conduct relating to the Commission staff’s June 2019 Information Request, including whether any member of EY’s executive team, General Counsel’s Office, compliance staff, or other EY employees contributed to the firm’s failure to correct its misleading submission.” (p. 12)  Like the Policies and Procedures review, EY must adopt the recommendations in the Remedial IC Report and EY’s Principal Executive Officer must certify their adoption to the SEC.

Notice to Audit Clients, Training, and Certifications

“Within 10 business days after entry of this Order, EY shall provide all of its issuer audit clients and SEC-registered broker-dealer audit clients a copy of this Order. . . . all audit professionals and all EY partners and employees who, at any time prior to March 3, 2020, were aware (i) of the Division of Enforcement’s June 19, 2019 request, (ii) of EY’s June 20, 2019 response, and (iii) that an employee had made a tip on June 19, 2019 concerning cheating shall complete a minimum of 6 hours every 6 months of ethics and integrity training by an independent training provider . . . . EY’s Principal Executive Officer shall also certify that the training requirements . . . have been completed.” (pp. 14-15)

Our Perspective

A company’s culture includes the values and assumptions that underlie daily work life and influence decision making.  What can we infer about EY’s culture from the behavior described above?

First, what managers did after they discovered the cheating – issuing memos and waving their arms – did not work.  Even if EY terminated some employees, perhaps the worst offenders or maybe the least productive ones, EY did not make their testing process more robust or secure.

Second, senior leadership has not suffered from this scandal.  There is no indication any senior managers have been disciplined or terminated because of the misconduct.  The head of EY’s U.S. operations left at the end of her 4-year term, but her departure was apparently due to a disagreement with her boss, EY’s global chief executive. 

Third, there has been no apparent change in the employees’ task environments, e.g., their workload expectations and compensation program.

Conclusion: EY management tolerated the cheating because their more important priorities were elsewhere.  It’s safe to assume that EY, like other professional service firms, primarily values and rewards technical competence and maximizing billable hours.

We see two drivers for possible changes: the $100 million fine and the mandated review by “Independent Consultants.”  (EY’s self-review will likely be no more useful than their previous memos and posturing.)

What needs to be done? 

To begin, senior leadership has to say fixing the cheating problem is vitally important, and walk the talk by adjusting company practices to reinforce the task’s importance.  Leadership has to commit to a company corrective action program that recognizes, analyzes, and permanently fixes all significant company problems as they arise – not after their noses are rubbed into action by the regulator.  

In addition, there have to be visible changes in the audit professionals’ task environment.  The employees need to get work time, in the form of unbilled overhead hours, to prepare for tests.  The compensation scheme needs to add a component to recognize and reward ethical behavior – with clients and internally.  The administration of ethics tests needs to be made more secure, on a par with the accounting exams the employees take.


*  Securities and Exchange Commission, Other Release No.: 34-95167 Re: Ernst & Young LLP (June 28, 2022).  All quotes in our post are from the SEC order.  There is also an associated SEC press release.

**  The Public Company Accounting Oversight Board establishes auditing and professional practice standards for registered public accounting firms, such as EY, to follow in the preparation of audit reports for public companies.  PCAOB members are appointed by the SEC.

Thursday, March 31, 2022

The Criminalization of Safety in Healthcare?


On March 25, 2022 a former nurse at Vanderbilt University Medical Center (VUMC) was convicted of gross neglect of an impaired adult and negligent homicide as a consequence of a fatal drug error in 2017.* 

Criminal prosecutions for medical errors are rare, and healthcare stakeholders are concerned about what this conviction may mean for medical practice going forward.  A major concern is that practitioners will be less likely to self-report errors for fear of incriminating themselves.

We have previously written about the intersection of criminal charges and safety management and practices.  In 2016 Safetymatters’ Bob Cudlin authored a 3-part series on this topic.  (See his May 24, May 31, and June 7 posts.)  Consistent with our historical focus on systems thinking, Bob reviewed examples in different industries and asked “where does culpability really lie - with individuals? culture? the corporation? or the complex socio-technical systems within which individuals act?”

“Corporations inherently, and often quite intentionally, place significant emphasis on achieving operational and business goals.  These goals at certain junctures may conflict with assuring safety.  The de facto reality is that it is up to the operating personnel to constantly rationalize those conflicts in a way that achieves acceptable safety.”

We are confident this is true in hospital nurses’ working environment.  They are often short-staffed, working overtime, and under pressure from their immediate task environments and larger circumstances such as the ongoing COVID pandemic.  The ceaseless evolution of medical technology means they have to adapt to constantly changing equipment, some of which is problematic.  Many/most healthcare professionals believe errors are inevitable.  See our August 6, 2019 and July 31, 2020 posts for more information about the extent, nature, and consequences of healthcare errors.

At VUMC, medicines are dispensed from locked cabinets after a nurse enters various codes.  The hospital had been having technical problems with the cabinets in early 2017 prior to the nurse’s error.  The nurse could not obtain the proper drug because she was searching using its brand name instead of its generic name.  She entered an override that allowed her to access additional medications and selected the wrong one, a powerful paralyzing agent.  The nurse and other medical personnel noted that entering overrides on the cabinets was a common practice.

VUMC’s problems extended well beyond troublesome medicine cabinets.  An investigator said VUMC had “a heavy burden of responsibility in this matter.”  VUMC did not report the medication error as required by law and told the local medical examiner’s office that the patient died of “natural” causes.  VUMC avoided criminal charges because prosecutors didn’t think they could prove gross negligence. 

Our Perspective

As Bob observed in 2016, “The reality is that criminalization is at its core a “disincentive.”  To be effective it would have to deter actions or decisions that are not consistent with safety but not create a minefield of culpability. . . .  Its best use is probably as an ultimate boundary, to deter intentional misconduct but not be an unintended trap for bad judgment or inadequate performance.”

In the instant case, the nurse did not intend to cause harm but her conduct definitely reflected bad judgment and unacceptable performance.  She probably sealed her own fate when she told law enforcement she “probably just killed a patient” and the licensing board that she had been “complacent” and “distracted.”   

But we see plenty of faults in the larger system, mainly that VUMC used cabinets that held dangerous substances and had a history of technical glitches but allowed users to routinely override cabinet controls to obtain needed medicines.  As far as we can tell, VUMC did not implement any compensating safety measures, such as requiring double checking by a colleague or a supervisor’s presence when overrides were performed or “dangerous” medications were withdrawn.

In addition, VUMC’s organizational culture was on full display with their inadequate and misleading reporting of the patient’s death.  VUMC has made no comment on the nurse’s case.  In our view, their overall strategy was to circle the wagons, seal off the wound, and dispose of the bad apple.  Nothing to see here, folks.

Going forward, the remaining VUMC nurses will be on high alert for a while but their day-to-day task demands will eventually force them to employ risky behaviors in an environment that requires such behavior to accomplish the mission but lacks defense in depth to catch errors before they have drastic consequences.  The nurses will/should be demanding a safer work environment.

Bottom line: Will this event mark a significant moment for accountability in healthcare akin to the George Floyd incident’s impact on U.S. police practices?  You be the judge.

For additional Safetymatters insights click the healthcare label below.

 

*  All discussion of the VUMC incident is based on reporting by National Public Radio (NPR).  See B. Kelman, “As a nurse faces prison for a deadly error, her colleagues worry: Could I be next?” NPR, March 22, 2022; “In Nurse’s Trial, Investigator Says Hospital Bears ‘Heavy’ Responsibility for Patient Death,” NPR, March 24, 2022; “Former nurse found guilty in accidental injection death of 75-year-old patient,” NPR, March 25, 2022.

Monday, November 9, 2020

Setting the Bar for Healthcare: Patient Care Goals from the Joint Commission

Joint Commission HQ
The need for a more effective safety culture (SC) in the field of healthcare is acute: every year tens of thousands of patients are injured or unnecessarily die while in U.S. hospitals. The scope of the problem became widely known with the publication of “To Err is Human: Building a Safer Health System”* in 2000. This report included two key observations: (1) the cause of the injuries and deaths is not bad people in health care, rather the people are working in bad systems that need to be made safer and (2) legitimate liability concerns discourage the reporting of errors, which means less feedback to the system and less learning from mistakes.

It's 20 years later. Is the healthcare system safer than it was in 2000? Yes. Is safety performance at a satisfactory level? No.

For evidence, we need look no further than a Nov. 18, 2019 blog post** by Dr. Mark Chassin, president and CEO of the Joint Commission (JC), the entity responsible for establishing standards for healthcare functions and patient care, and evaluating, accrediting, and certifying healthcare organizations based on their compliance with the standards.

Dr. Chassin summarized the current situation as follows: “The health care industry has directed a substantial amount of time, effort, and resources at solving the problems, and we have seen some progress. That progress has typically occurred one project at a time, with hard-working quality professionals applying a “one-size-fits-all” best practice to address each problem. The resulting improvements have been pretty modest, difficult to sustain, and even more difficult to spread.”

Going forward, he says the industry can make substantial progress by committing to zero harm, overhauling the organizational culture, and utilizing proven process improvement techniques. He singles out the aviation and nuclear power industries for having similar commitments.

But achieving substantial, sustained improvement is a big lift. To get a feel for how big, let's look at the 2020 goals and strategies the JC has established for patient care in hospitals, in other words, where the performance bar is set today.*** We will try to inform your own judgment about their scope and sufficiency by comparing them with corresponding activities in the nuclear power industry.

1. Identify patients correctly by using at least two ways to identify them.

This is a major challenge in a hospital where many patients are entering and leaving the system every day, being transferred to and from different departments, and being treated by multiple individuals who have different roles and ranks, and are treating patients at different levels of intensity for different periods of time. There is really no analogue in the closed, controlled personnel environment of a power plant.

2. Improve staff communication by getting important test results to the right staff person on time.

This should be a familiar challenge to people in any organization, including a power plant, where functions may exist in different organizational silos with their own procedures, vocabulary, and priorities.

3. Use medicines safely by labeling medicines that are not labeled, taking extra care with patients on blood thinners, and managing patients' medicine records for accuracy, completeness, and possible interactions.

This is similar to requirements to accurately label, control, and manage the use of all chemicals used in an industrial facility.

4. Use alarms safely by ensuring that alarms on medical equipment are heard and responded to on time.

In a hospital, it is a problem when multiple alarms are going off at the same time, with differing degrees of urgency for personnel attention and response. In power plants, operators have been known to turn off alarms that are reporting too many false positives. These situations call out for operating and maintenance standards and practices that ensure all activated alarms are valid and deserving of a response.

5. Prevent infection by adhering to Centers for Disease Control or World Health Organization hand cleaning guidelines.

The aim is to keep bad bugs from circulating. Compare this practice to the myriad procedures, personnel, and equipment dedicated to ensuring nuclear power plant radioactivity is kept in an identified, controlled, and secure environment.

6. Identify patient safety risks by reducing the risk for suicide.

Compare this with the wellness, fitness for duty, and behavioral observation programs at every nuclear power plant.

7. Prevent mistakes in surgery by making sure that the correct surgery is done on the correct patient and at the correct place on the patient’s body, and pausing before the surgery to make sure that a mistake is not being made.

This is similar to tailgate meetings before maintenance activities and using the STAR (Stop-Think-Act-Review) approach before and during work. Think of the potential for error in mirror-image plants; people are bilateral but subject to similar risks.

Our Perspective

The JC's set of goals is thin gruel to show after 20 years. In our view, efforts to date reflect two major shortcomings: a lack of progress in defining and strengthening SC, and a lack of any shared understanding of what the relevant system consists of, how it functions, and how to improve it.

Safety Culture

Our July 31, 2020 post on When We Do Harm by Dr. Danielle Ofri discussed the key attributes for a strong healthcare SC, i.e., one where the probability of errors is much lower than it is today. In Ofri's view, the primary cultural attribute for reducing errors is a willingness of individuals to assume ownership and get the necessary things done, even if it's not in their specific job description, amid a diffusion of responsibility in their task environment. Secondly, all members of the organization, regardless of status, should have the ability (or duty even) to point out problems and errors without fear of retribution. The culture should regard reporting an adverse event as a routine and ordinary task. Third, organizational leaders, including but not limited to senior managers, must encourage criticism, forbid scapegoating, and not allow hierarchy and egos to overrule what is right and true. There should be deference to proven expertise and widely held authority to say “stop” when problems become apparent.

The Healthcare System

The healthcare system includes the providers, the supporting infrastructure, external environmental factors, e.g., regulators and insurance companies, the patients and their families, and all the interrelationships and dynamics between these components. An important dynamic is feedback, where the quality and quantity of output from one component influences performance in other system components. System dynamics create homeostasis, fluctuations, and all levels of performance from superior to failure. Other organizational variables, e.g., management decision-making practices and priorities, and the compensation scheme, provide context for system functioning. For more on system attributes, please see our Oct. 9, 2019 post or click the healthcare label.

Bottom line: Compare the JC's efforts with the vast array of safety and SC-related policies, procedures, practices, activities, and dedicated personnel in your workplace. Healthcare has a long way to go.


* Institute of Medicine (L.T. Kohn et al), “To Err Is Human: Building a Safer Health System” (Washington, D.C.: The National Academies Press) 2000. Retrieved Nov. 5, 2020.

** M. Chassin, “To Err is Human: The Next 20 Years,” blog post (Nov. 18, 2019).  Retrieved Nov. 1, 2020.

*** The Joint Commission, “2020 Hospital National Patient Safety Goals,” simplified version (July, 2020). Retrieved Nov. 1, 2020.


Friday, July 31, 2020

Culture in Healthcare: Lessons from When We Do Harm by Danielle Ofri, MD

In her book*, Dr. Ofri takes a hard look at the prevalence of medical errors in the healthcare system.  She reports some familiar statistics** and fixes, but also includes highly detailed case studies where errors large and small cascaded over time and the patients died.  This post summarizes her main observations.  She does not provide a tight summary of a less error-prone healthcare culture but she drops enough crumbs that we can infer its desirable attributes.

Healthcare is provided by a system

The system includes the providers, the supporting infrastructure, and factors in the external environment.  Ofri observes that medical care is exceedingly complicated and some errors are inevitable.  Because errors are inevitable, the system should emphasize error recognition and faster recovery with a goal of harm reduction.

She shares our view that the system permits errors to occur so fixes should focus on the system and not on the individual who made an error.***  System failures will eventually trap the most conscientious provider.  She opines that most medical errors are the result of a cascade of actions that compound one another; we would say the system is tightly coupled.

System “improvements” intended to increase efficiency can actually reduce effectiveness.  For example, electronic medical records can end up dictating providers’ practices, fragmenting thoughts and interfering with the flow of information between doctor and patient.****  Data field defaults and copy and paste shortcuts can create new kinds of errors.  Diagnosis codes driven by insurance company billing requirements can distort the diagnostic process.  In short, patient care becomes subservient to documentation.

Other changes can have unforeseen consequences.  For example, scheduling fewer working hours for interns leads to fewer diagnostic and medication errors but also results in more patient handoffs (where half of adverse medical events are rooted.)    

Aviation-inspired checklists have limited applicability

Checklists have reduced error rates for certain procedures but can lead to unintended consequences, e.g., mindless check-off of the items (to achieve 100% completion in the limited time available) and provider focus on the checklist while ignoring other things that are going on, including emergent issues.

Ofri thinks the parallels between healthcare and aviation are limited because of the complexity of human physiology.  While checklists may be helpful for procedures, doctors ascribe limited value to process checklists that guide their thinking.

Malpractice suits do not meaningfully reduce the medical error rate

Doctors fear malpractice suits so they practice defensive medicine, prescribing extra tests and treatments which have their own risks of injury and false positives, and lead to extra cost.  Medical equipment manufacturers also fear lawsuits so they design machines that sound alarms for all matters great and small; alarms are so numerous they are often simply ignored by the staff.

Hospital management culture is concerned about protecting the hospital’s financial interests against threats, including lawsuits.  A Cone of Silence is dropped over anything that could be considered an error and no information is released to the public, including family members of the injured or dead patient.  As a consequence, it is estimated that fewer than 10% of medical errors ever come to light.  There is no national incident reporting system because of the resistance of providers, hospitals, and trial lawyers.

The reality is a malpractice suit is not practical in the vast majority of cases of possible medical error.  The bar is very high: your doctor must have provided sub-standard care that caused your injury/death and resulted in quantifiable damages.  Cases are very expensive and time-consuming to prepare and the legal system, like the medical system, is guided by money so an acceptable risk-reward ratio has to be there for the lawyers.***** 

Desirable cultural attributes for reducing medical errors

In Ofri’s view, culture includes hierarchy, communications skill, training traditions, work ethic, egos, socialization, and professional ideals.  The primary cultural attribute for reducing errors is a willingness of individuals to assume ownership and get the necessary things done amid a diffusion of responsibility.  This must be taught by example and individuals must demand comparable behavior from their colleagues.

Providing medical care is a team business

Effective collaboration among team members is key, as is the ability (or duty even) of lower-status members to point out problems and errors without fear of retribution.  Leaders must encourage criticism, forbid scapegoating, and not allow hierarchy and egos to overrule what is right and true.  Where practical, training should be performed in groups who actually work together to build communication skills.

Doctors and nurses need time and space to think

Doctors need the time to develop differential diagnosis, to ask and answer “What else could it be?”  The provider’s thought process is the source of most diagnostic error, and subject to explicit and implicit biases, emotions, and distraction.  However, stopping to think can cause delays which can be reported as shortcomings by the tracking system.  The culture must acknowledge uncertainty (fueled by false positives and negatives), address overconfidence, and promote feedback, especially from patients.

Errors and near misses need to be reported without liability or shame

The culture should regard reporting an adverse event as a routine and ordinary task.  This is a big lift for people steeped in the hierarchy of healthcare and the impunity of its highest ranked members.  Another factor to be overcome is the reluctance of doctors to report errors because of their feelings of personal and professional shame.

Ofri speaks favorably of a “just culture” that recognizes that unintentional error is possible, but risky behavior like taking shortcuts requires (system) intervention, and negligence should be disciplined.  In addition, there should not be any bias in how penalties are handed out, e.g., based on status.

In sum, Ofri says healthcare will always be an imperfect system.  Ultimately, what patients want is acknowledgement of errors and apology for them from doctors.

Our Perspective

Ofri’s major contribution is her review of the evidence showing how pervasive medical errors are and how the healthcare industry works overtime to deny and avoid responsibility for them.

Her suggestions for a safer healthcare culture echo what we’ve been saying for years about the attributes of a strong safety culture.  Reducing the error rates will be hard for many reasons.  For example, Ofri observes medical training forges a lifelong personal identity and reverence for tradition; in our view, it also builds in resistance to change.  The biases in decision making that she mentions are not trivial.  For one discussion of such biases, see our Dec. 18, 2013 review of Daniel Kahneman’s work.

Bottom line: After you read this, you will be clutching your rosary a little tighter if you have to go to a hospital for a major injury or illness.  You are more responsible for your own care than you think.


*  D. Ofri, When We Do Harm (Boston: Beacon Press, 2020).

**  For example, a study reporting that almost 4% of hospitalizations resulted in medical injury, of which 14% were fatal, and doctors’ diagnostic accuracy is estimated to be in the range of 90%.

***  It has been suggested that the term “error” be replaced with “adverse medical event” to reduce the implicit focus on individuals.

****  Ofri believes genuine conversation with a patient is the doctor’s single most important diagnostic tool.

***** As an example of the power of money, when Medicare started fining hospitals for shortcomings, the hospitals started cleaning up their problems.

Monday, June 29, 2020

A Culture that Supports Dissent: Lessons from In Defense of Troublemakers by Charlan Nemeth

Charlan Nemeth is a psychology professor at the University of California, Berkeley.  Her research and practical experience inform her conclusion that the presence of authentic dissent during the decision making process leads to better informed and more creative decisions.  This post presents highlights from her 2018 book* and provides our perspective on her views.

Going along to get along

Most people are inclined to go along with the majority in a decision making situation, even when they believe the majority is wrong.  Why?  Because the majority has power and status, most organizational cultures value consensus and cohesion, and most people want to avoid conflict. (179)

An organization’s leader(s) may create a culture of agreement but consensus, aka the tyranny of the majority, gives the culture its power over members.  People consider decisions from the perspective of the consensus, and they seek and analyze information selectively to support the majority opinion.  The overall effect is sub-optimal decision making; following the majority requires no independent information gathering, no creativity, and no real thinking. (36,81,87-88)

Truth matters less than group cohesion.  People will shape and distort reality to support the consensus—they are complicit in their own brainwashing.  They will willingly “unknow” their beliefs, i.e., deny something they know to be true, to go along.  They live in information bubbles that reinforce the consensus, and are less likely to pay attention to other information or a different problem that may arise.  To get along, most employees don’t speak up when they see problems. (32,42,98,198)

“Groupthink” is an extreme form of consensus, enabled by a norm of cohesion, a strong leader, situational stress, and no real expectation that a better idea than the leader’s is possible.  The group dynamic creates a feedback loop where people repeat and reinforce the information they have in common, leading to more extreme views and eventually the impetus to take action.  Nemeth’s illustrative example is the decision by President John Kennedy and his advisors to authorize the disastrous Bay of Pigs invasion.** (140-142)

Dissent adds value to the decision making process

Dissent breaks the blind following of the majority and stimulates thought that is more independent and divergent, i.e., creates more alternatives and considers facts on all sides of the issue.  Importantly, the decision making process is improved even when the dissenter is wrong because it increases the group’s chances of identifying correct solutions. (7-8,12,18,116,180) 

Dissent takes courage but can be contagious; a single dissenter can encourage others to speak up.  Anonymous dissent can help protect the dissenter from the group. (37,47) 

Dissent must be authentic, i.e., it must reflect the true beliefs of the dissenter.  To persuade others, the dissenter must remain consistent in his position.  He can only change because of new or changing information.  Only authentic, persistent dissent will force others to confront the possibility that they may be wrong.  At the end of the day, getting a deal may require the dissenter to compromise, but changing the minds of others requires consistency. (58,63-64,67,115,190)

Alternatives to dissent

Other, less antagonistic, approaches to improving decision making have been promoted.  Nemeth finds them lacking.

Training is the go-to solution in many organizations but is not very effective at addressing biases or getting people to speak up in the face of power and hierarchy.  Dissent is superior to training because it prompts reconsidering positions and contemplating alternatives. (101,107)

Classical brainstorming incorporates several rules for generating ideas, including withholding criticism of ideas that have been put forth.  However, Nemeth found in her research that allowing (but not mandating) criticism led to more ideas being generated.   In her view, it’s the “combat between different positions that provides the benefits to decision making.” (131,136)

Demographic diversity is promoted as a way to get more input into decisions.  But demographics such as race or gender are not as helpful as diversity of skills, knowledge, and backgrounds (and a willingness to speak up), along with leaders who genuinely welcome different viewpoints. (173,175,200)

The devil’s advocate approach can be better than nothing, but it generally leads to considering the negatives of the original position, i.e., the group focuses on better defenses for that position rather than alternatives to it.  Group members believe the approach is fake or acting (even when the advocate really believes it) so it doesn’t promote alternative thinking or force participants to confront the possibility that they may be wrong.  The approach is contrived to stimulate divergent thinking but it actually creates an illusion that all sides have been considered while preserving group cohesion. (182-190,203-04)

Dissent is not free for the individual or the group

Dissenters are disliked, ridiculed, punished, or worse.  Dissent definitely increases conflict and sometimes lowers morale in the group.  It requires a culture where people feel safe in expressing dissent, and it’s even better if dissent is welcomed.  The culture should expect that everyone will be treated with respect. (197-98,209)

Our Perspective

We have long argued that leaders should get the most qualified people, regardless of rank or role, to participate in decision making and that alternative positions should be encouraged and considered.  Nemeth’s work strengthens and extends our belief in the value of different views.

If dissent is perceived as an honest effort to attain the truth of a situation, it should be encouraged by management and tolerated, if not embraced, by peers.  Dissent may dissuade the group from linear cause-effect, path of least resistance thinking.  We see a similar practice in Ray Dalio’s concepts of an idea meritocracy and radical open-mindedness, described in our April 17, 2018 review of his book Principles.  In Dalio’s firm, employees are expected to engage in lively debate, intellectual combat even, over key decisions.  His people have an obligation to speak up if they disagree.  Not everyone can do this; a third of Dalio’s new hires are gone within eighteen months.

On the other hand, if dissent is perceived as self-serving or tattling, then the group will reject it like a foreign virus.  Let’s face it: nobody likes a rat.

We agree with Nemeth’s observation that training is not likely to improve the quality of an organization’s decision making.  Training can give people skills or techniques for better decision making but training does not address the underlying values that steer group decision making dynamics. 

Much academic research of this sort is done using students as test subjects.***  They are readily available, willing to participate, and follow directions.  Some folks think the results don’t apply to older adults in formal organizations.  We disagree.  Student groups are made up of strangers who don’t have to worry about power and personal relationships the way people in work settings do, so the underlying psychological mechanisms can be clearly and cleanly exposed.

Bottom line: This is a lucid book written for popular consumption, not an academic journal, and is worth a read. 


(Give me the liberty to know, to utter, and to argue freely according to conscience. — John Milton)


*  C. Nemeth, In Defense of Troublemakers (New York: Basic Books, 2018).

**  Kennedy learned from the Bay of Pigs fiasco.  He used a much more open and inclusive decision making process during the Cuban Missile Crisis.

***  For example, Daniel Kahneman’s research reported in Thinking, Fast and Slow, which we reviewed Dec. 18, 2013.

Wednesday, November 6, 2019

National Academies of Sciences, Engineering, and Medicine Systems Model of Medical Clinician Burnout, Including Culture Aspects

Source: Medical Academic S. Africa
We have been posting about preventable harm to health care patients, emphasizing how improved organizational mental models and attention to cultural attributes might reduce the incidence of such harm.  A new National Academies of Sciences, Engineering, and Medicine (NASEM) committee report* looks at one likely contributor to the patient harm problem: clinician burnout.**  The NASEM committee purports to use a systems model to analyze burnout and develop strategies for reducing burnout while fostering professional well-being and enhancing patient care.  This post summarizes the 300+ page report and offers our perspective on it.

The Burnout Problem and the Systems Model 


Clinician burnout is caused by stressors in the work environment; burnout can lead to behavioral and health issues for clinicians, clinicians prematurely leaving the healthcare field, and poorer treatment and outcomes for patients.  This widespread problem requires a “systemic approach to burnout that focuses on the structure, organization, and culture of health care.” (p. 3)

The NASEM committee’s systems model has three levels: frontline care delivery, the health care organization, and the external environment.  Frontline care delivery is the environment in which care is provided.  The health care organization includes the organizational culture, payment and reward systems, processes for managing human capital and human resources, the leadership and management style, and organizational policies. The external environment includes political, market, professional, and societal factors.

All three levels contribute to an individual clinician’s work environment, and ultimately boil down to a set of job demands and job resources for the clinician.

Recommendations

The report identifies multiple factors that need to be considered when developing interventions, including organizational values and leadership; a work system that provides adequate resources, facilitates team work, collaboration, communication, and professionalism; and an implementation approach that builds a learning organization, reward systems that align with organizational values, nurtures organizational culture, and uses human-centered design processes. (p. 7)

The report presents six recommendations for reducing clinician burnout and fostering professional well-being:

1. Create positive work environments,
2. Create positive learning environments,
3. Reduce administrative burdens,
4. Optimize the use of health information technologies,
5. Provide support to clinicians to prevent and alleviate burnout, and foster professional well-being, and
6. Invest in research on clinician professional well-being.

Our Perspective

We’ll ask and answer a few questions about this report.

Did the committee design an actual and satisfactory systems model?

We have promoted systems thinking since the inception of Safetymatters so we have some clear notions of what should be included in a systems model.  We see both positives and missing pieces in the NASEM committee’s approach.***

On the plus side, the tri-level model provides a useful and clear depiction of the health care system and leads naturally to an image of the work world each clinician faces.   We believe a model should address certain organizational realities—goal conflict, decision making, and compensation—and this model is minimally satisfactory in these areas.  A clinician’s potential goal conflicts, primarily maintaining a patient focus while satisfying the organization’s quality measures, managing limited resources, achieving economic goals, and complying with regulations, are mentioned only once. (p. 54)  Decision making (DM) specifics are discussed in several areas, including evidence-based DM (p. 25), the patient’s role in DM (p. 53), the burnout threat when clinicians lack input to DM (p. 101), the importance of participatory DM (pp. 134, 157, 288), and information technology as a contributor to DM (p. 201).  Compensation, which includes incentives, should align with organizational values (pp. 10, 278, 288), and should not be a stressor on the individual (p. 153).  Non-financial incentives such as awards and recognition are not mentioned.

On the downside, the model is static and two-dimensional.  The interrelationships and dynamics among model components are not discussed at all.  For example, the importance of trust in management is mentioned (p. 132) but the dynamics of trust are not discussed.  In our experience, “trust” is a multivariate function of, among other things, management’s decisions, follow-through, promise keeping, role modeling, and support of subordinates—all integrated over time.  In addition, model components feed back into one another, both positively and negatively.  In the report, the use of feedback is limited to clinicians’ experiences being fed back to the work designers (pp. 6, 82), continuous learning and improvement in the overall system (pp. 30, 47, 51, 157), and individual work performance recognition (pp. 103, 148).  It is the system dynamics that create homeostasis, fluctuations, and all levels of performance from superior to failure.

Does culture play an appropriate role in the model and recommendations?

We know that organizational culture affects performance.  And culture is mentioned throughout this report as a system component with the implication that it is an important factor, but it is not defined until a third of the way through the report.****  The NASEM committee apparently assumes everyone knows what culture is, and that’s a problem because groups, even in the same field, often do not share a common definition of culture.

But the lack of a definition doesn’t stop the authors from hanging all sorts of attributes on the culture tree.  For example, the recommendation details include “Nurture (establish and sustain) organizational culture that supports change management, psychological safety, vulnerability, and peer support.” (p. 7)  This is mostly related to getting clinicians to recognize their own burnout and seek help, and removing the social stigma associated with getting help.  There are a lot of moving parts in this recommendation, not the least of which is overcoming the long-held cultural ideal of the physician as a tough, all-knowing, powerful authority figure. 

Teamwork and participatory decision making are promoted (pp. 10, 51) but this can be a major change for organizations that traditionally have strong silos and value adherence to established procedures and protocols. 

There are bromides sprinkled through the report.  For example, “Leadership, policy, culture, and incentives are aligned at all system levels to achieve quality aims and promote integrity, stewardship, and accountability.” (p. 25)  That sounds worthy but is a huge task to specify and implement.  Same with calling for a culture of continuous learning and improvement, or in the committee’s words a “Leadership-instilled culture of learning—is stewarded by leadership committed to a culture of teamwork, collaboration, and adaptability in support of continuous learning as a core aim” (p. 51)

Are the recommendations useful?

We hope so.  We are not behavioral scientists but the recommendations appear to represent sensible actions.  They may help and probably won’t hurt—unless a health care organization makes promises that it cannot or will not keep.  That said, the recommendations are pretty vanilla and the NASEM committee cannot be accused of going out on any limbs.

Bottom line: Clinician burnout undoubtedly has a negative impact on patient care and outcomes.  Anything that can reduce burnout will improve the performance of the health care system.  However, this report does not appreciate the totality of cultural change required to implement the modest recommendations.


*  National Academies of Sciences, Engineering, and Medicine, “Taking Action Against Clinician Burnout: A Systems Approach to Professional Well-Being,” (Washington, DC: The National Academies Press, 2019). 


**  “Burnout is a syndrome characterized by high emotional exhaustion, high depersonalization (i.e., cynicism), and a low sense of personal accomplishment from work.” (p. 1)  “Clinician burnout is associated with an increased risk of patient safety incidents . . .” (p. 2)

***  As an aside, the word “systems” is mentioned over 700 times in the report.

****  “Organizational culture is defined by the fundamental artifacts, values, beliefs, and assumptions held by employees of an organization (Schein, 1992). An organization’s culture is manifested in its actions (e.g., decisions, resource allocation) and relayed through organizational structure, focus, mission and value alignment, and leadership behaviors” (p. 99)  This is good but it should have been presented earlier in the report.

Tuesday, May 28, 2019

The Study of Organizational Culture: History, Assessment Methods, and Insights

We came across an academic journal article* that purports to describe the current state of research into organizational culture (OC).  It’s interesting because it includes a history of OC research and practice, and a critique of several methods used to assess it.  Following is a summary of the article and our perspective on it, focusing on any applicability to nuclear safety culture (NSC).

History

In the late 1970s scholars studying large organizations began to consider culture as one component of organizational identity.  In the same time frame, practicing managers also began to show an interest in culture.  A key driver of their interest was Japan’s economic ascendance and descriptions of Japanese management practices that depended heavily on cultural factors.  The notion of a linkage between culture and organizational performance inspired non-Japanese managers to seek out assistance in developing culture as a competitive advantage for their own companies.  Because of the sense of urgency, practical applications (usually developed and delivered by consultants) were more important than developing a consistent, unified theory of OC.  Practitioners got ahead of researchers and the academic world has yet to fully catch up.

Consultant models only needed a plausible, saleable relationship between culture and organizational performance.  In academic terms, this meant that a consultant’s model relating culture to performance only needed some degree of predictive validity.  Such models did not have to exhibit construct validity, i.e., some proof that they described, measured, or assessed a client organization’s actual underlying culture.  A second important selling point was the consultants’ emphasis on the singular role of the senior leaders (i.e., the paying clients) in molding a new high-performance culture.

Over time, the emphasis on practice over theory and the fragmented efforts of OC researchers led to some distracting issues, including the definition of OC itself, the culture vs. climate debate, and qualitative vs. quantitative models of OC. 

Culture assessment methods 


The authors provide a detailed comparison of four quantitative approaches for assessing OC: the Denison Organizational Culture Survey (used by more than 5,000 companies), the Competing Values Framework (used in more than 10,000 organizations), the Organizational Culture Inventory (more than 2,000,000 individual respondents), and the Organizational Culture Profile (OCP, developed by the authors and used in a “large number” of research studies).  We’ll spare you the gory details but unsurprisingly, the authors find shortcomings in all the approaches, even their own. 

Some of this criticism is sour grapes over the more popular methods.  However, the authors mix their criticism with acknowledgement of functional usefulness in their overall conclusion about the methods: because they lack a “clear definition of the underlying construct, it is difficult to know what is being measured even though the measure itself has been shown to be reliable and to be correlated with organizational outcomes.” (p. 15)

Building on their OCP, the authors argue that OC researchers should start with the Schein three-level model (basic assumptions and beliefs, norms and values, and cultural artifacts) and “focus on the norms that can act as a social control system in organizations.” (p. 16)  As controllers, norms can be descriptive (“people look to others for information about how to act and feel in a given situation”) or injunctive (how the group reacts when someone violates a descriptive norm).  Attributes of norms include content, consensus (how widely they are held), and intensity (how deeply they are held).
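
To make the consensus and intensity attributes concrete, here is a minimal sketch of how they might be scored from survey responses.  This is our own illustration, not from the article; the 1-5 rating scale and the scoring rules are assumptions chosen for simplicity.

```python
import numpy as np

# Hypothetical data: 1-5 agreement ratings from ten respondents on a single
# norm statement, e.g., "People here stop work when they see an unsafe condition."
ratings = np.array([5, 5, 4, 5, 2, 5, 4, 5, 5, 3])

# Consensus: how widely the norm is held.  Low dispersion means high consensus;
# scaled so 1.0 = everyone gives the same rating, 0.0 = maximum disagreement.
max_sd = (5 - 1) / 2                      # worst case: half rate 1, half rate 5
consensus = 1 - ratings.std() / max_sd

# Intensity: how deeply the norm is held, measured as the distance of the mean
# rating from the neutral midpoint (3), scaled to 0-1.
intensity = abs(ratings.mean() - 3) / 2

print(f"consensus = {consensus:.2f}, intensity = {intensity:.2f}")
```

A strong norm would score high on both measures; as we note below, a norm that is widely agreed upon but not intensely held is a marker of a weaker culture.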

Our Perspective

So what are we to make of all this?  For starters, it’s important to recognize that some of the topics the academics are still quibbling over have already been settled in the NSC space.  The Schein model of culture is accepted world-wide.  Most folks now recognize that a safety survey, by itself, only reflects respondents’ perceptions at a specific point in time, i.e., it is a snapshot of safety climate.  And a competent safety culture assessment includes both qualitative and quantitative data: surveys, focus groups, interviews, observations, and review of artifacts such as documents.

However, we may still make mistakes.  Our mental models of safety culture may be incomplete or misassembled, e.g., we may see a direct connection between culture and some specific behavior when, in reality, there are intervening variables.  We must acknowledge that OC can be a multidimensional sub-system with complex internal relationships interacting with a complicated socio-technical system surrounded by a larger legal-political environment.  At the end of the day, we will probably still have some unknown unknowns.

Even if we follow the authors’ advice and focus on norms, it remains complicated.  For example, it’s fairly easy to envision that safety could be a widely agreed upon, but not intensely held, norm; that would define a weak safety culture.  But how about safety and production and cost norms in a context with an intensely held norm about maintaining good relations with and among long-serving coworkers?  That could make it more difficult to predict specific behaviors.  However, people might be more likely to align their behavior around the safety norm if there was general consensus across the other norms.  Even if safety is the first among equals, consensus on other norms is key to a stronger overall safety culture that is more likely to sanction deviant behavior.
 
The authors claim culture, as defined by Schein, is not well-investigated.  Most work has focused on correlating perceptions about norms, systems, policies, procedures, practices and behavior (one’s own and others’) to organizational effectiveness with a purpose of identifying areas for improvement initiatives that will lead to increased effectiveness.  The manager in the field may not care if diagnostic instruments measure actual culture, or even what culture he has or needs; he just wants to get the mission accomplished while avoiding the opprobrium of regulators, owners, bosses, lawmakers, activists and tweeters. If your primary focus is on increasing performance, then maybe you don’t need to know what’s under the hood. 

Bottom line: This is an academic paper with over 200 citations but is quite readable although it contains some pedantic terms you probably don’t hear every day, e.g., the ipsative approach to ranking culture attributes (ordinary people call this “forced choice”) and Q factor analysis.**  Some of the one-sentence descriptions of other OC research contain useful food for thought and informed our commentary in this write-up.  There is a decent dose of academic sniping in the deconstruction of commercially popular “culture” assessment methods.  However, if you or your organization are considering using one of those methods, you should be aware of what it does, and doesn’t, incorporate. 


*  J.A. Chatman and C.A. O’Reilly, “Paradigm lost: Reinvigorating the study of organizational culture,” Research in Organizational Behavior (2016).  Retrieved May 28, 2019.

**  “Normal factor analysis, called "R method," involves finding correlations between variables (say, height and age) across a sample of subjects. Q, on the other hand, looks for correlations between subjects across a sample of variables. Q factor analysis reduces the many individual viewpoints of the subjects down to a few "factors," which are claimed to represent shared ways of thinking.”  Wikipedia, “Q methodology.”   Retrieved May 28, 2019.
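
For readers who want a more concrete picture of the R-versus-Q distinction, here is a minimal sketch using made-up data (our illustration, not from the paper or the quote above).  The same respondent-by-item matrix yields an item-by-item correlation matrix under the R method and a person-by-person matrix under the Q method; the latter is what gets factor-analyzed to find groups of people with shared viewpoints.

```python
import numpy as np

# Illustrative only: a made-up respondent-by-item survey matrix.
rng = np.random.default_rng(0)
data = rng.normal(size=(30, 8))            # 30 respondents x 8 survey items

# R method: correlate the variables (items) across respondents -> 8 x 8 matrix.
r_corr = np.corrcoef(data, rowvar=False)

# Q method: correlate the respondents across items -> 30 x 30 matrix, which is
# then factor-analyzed to group people who share a way of thinking.
q_corr = np.corrcoef(data, rowvar=True)

print(r_corr.shape, q_corr.shape)          # (8, 8) (30, 30)
```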

Friday, March 8, 2019

Decision Making, Values, and Culture Change

Typical New Yorker cover
In the nuclear industry, most decisions are at least arguably “hard,” i.e., decision makers can agree on the facts and identify areas where there is risk or uncertainty.  A recent New Yorker article* on making an indisputably “soft” decision got us wondering if the methods and philosophy described in the article might provide some insight into qualitative personal decisions in the nuclear space.

Author Joshua Rothman’s interest in decision making was piqued by the impending birth of his first child.  When exactly did he decide that he wanted children (after not wanting them) and then participate with his wife to make it happen?  As he says, “If I made a decision, it wasn’t a very decisive one.”  Thus began his research into decision making methods and philosophy.

Rothman opens with a quick review of several decision making techniques.  He describes Benjamin Franklin’s “prudential algebra,” Charles Darwin’s lists of pros and cons, Leo Tolstoy’s expositions in War and Peace (where it appears the biggest decisions basically make themselves), and modern decision science processes that develop decisions through iterative activities performed by groups, scenario planning and war games. 

Eventually the author gets to decision theory, which holds that sound decisions flow from values.  Decision makers ask what they value and then seek to maximize it.  But what if “we’re unsure what we care about, or when we anticipate that what we care about might shift”?  What if we opt to change our values? 

The focus on values leads to philosophy.  Rothman draws heavily on the work of Agnes Callard, a philosopher at the University of Chicago, who believes that life-altering decisions are not made suddenly but through a more gradual process: “Old Person aspires to become New Person.”  Callard emphasizes that aspiration is different from ambition.  Ambitious people know exactly why they’re doing something, e.g., taking a class to get a good grade or modeling different behavior to satisfy regulatory scrutiny.  Aspirants, on the other hand, have a harder time because they have a less clear sense of their current activities’ value and can only hope their future selves can understand and appreciate it.  “To aspire, Callard writes, is to judge one’s present-day self by the standards of a future self who doesn’t yet exist.”

Our Perspective

We can consider the change of an organization’s culture as the integration over time of the changes in all its members’ behaviors and values.  We know that values underlie culture and significant cultural change requires shifting the actual (as opposed to the espoused) values of the organization.  This is not easy.  The organization’s more ambitious members will find it easier to get with the program; they know change is essential and are willing to adapt to keep their jobs or improve their standing.  The merely aspiring will have a harder time.  Because they lack a clear picture of the future organizational culture, they may be troubled by unexplored options, i.e., some different path or future that might be equally good or even better.  They may learn that no matter how deeply they study the experience of others, they still don’t really know what they’re getting into.  They don’t understand what the change experience will be like and how it will affect them.  They may be frustrated to discover that modeling desired new behaviors does not help because they still feel like the same people in the old culture.  Since personal change is not instantaneous, they may even get stuck somewhere between the old culture and the new culture.

Bottom line: Cultural change is harder for some people than others.  This article is an easy read that offers an introduction to the personal dynamics associated with changing one’s outlook or values.

*  J. Rothman, “The Art of Decision-Making,” The New Yorker (Jan. 21, 2019).  Retrieved March 1, 2019.