Statistical Analysis And Class Actions: Part 3

Jun 9, 2015

This is the third of three pieces discussing statistical analysis of class certification topics in wage-and-hour class and collective actions. In parts one and two, I reviewed problems associated with relying on overall averages in addressing class certification and discussed rigorous analysis and statistical testing of whether common patterns are present in a proposed class. In part three, below, I discuss the use of sampling in analysis of class certification.

Statistical Sampling and Assessments of Commonality in Wage-and-Hour Class and Collective Actions

Dating at least to the U.S. Supreme Court's rulings in Wal-Mart Stores Inc. v. Dukes[1] and Comcast Corp. v. Behrend[2] as well as the Seventh Circuit's ruling in Espenscheid v. DirectSat USA,[3] a clear trend has emerged within class and collective action litigation whereby greater scrutiny is required at earlier stages of the process.

Dukes and Comcast underscored the importance of rigorous analysis of commonality at the class certification stage under Rule 23, while Espenscheid highlighted the growing congruence between the commonality assessment required under Rule 23 and the “similarly situated” analysis under the Fair Labor Standards Act.

The California Supreme Court’s affirmation of the court of appeal’s decision to decertify the class in Duran v. U.S. Bank[4] fits within this wider trend. The proposed class of allegedly misclassified loan officers was initially certified by the trial court, and plaintiffs developed a sampling methodology to demonstrate liability. The court of appeal subsequently decertified the class due to problems with the sample and other concerns. The state supreme court affirmed the court of appeal with an admonition that the class should have been decertified earlier in the process, and emphasized that at the class certification stage plaintiffs must present a viable methodology to manage the case should a class be certified.[5]

For those contemplating the use of samples at a later stage of a proceeding, this trend toward greater scrutiny earlier in the process underscores the need for a thoughtful and careful approach to the sample design, one that can withstand scrutiny at the class certification stage. Among other things, care should be taken to account for possible sources of variability in the population by stratifying and over sampling different subparts of the proposed class. Properly executed, such an approach would improve precision in the resulting estimates (i.e., minimize the margin of error in the estimate). Moreover, such an approach enables rigorous analysis of commonality/predominance questions.

Sample Design in Wage-and-Hour Class and Collective Actions

In wage-and-hour class and collective actions, I often see sampling efforts intended to answer a variety of questions relating to off-the-clock claims, misclassification and other topics.[6] Sample projects may be intended for specific use at the class certification stage or, if a class has already been certified, may address liability and damages. In either scenario the results of a good sample should provide accurate and reasonably precise estimates of the outcome studied and inform whether individualized issues predominate over common class issues. However, what I typically see are single, “simple random sample” designs for the proposed class population that fall well short in both dimensions.

Simple random samples assume the presence of a single reasonably homogenous distribution with relatively little variation around an average outcome. When substantial variation exists in the population studied, a simple random sample, particularly in conjunction with too small a sample size, will lead to estimates measured with too much error. This was one of the (many) problems with the Duran sample effort. The small sample in that case exhibited substantial variation in reported outcomes, and the associated margins of error were excessive.
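The interaction between variability and sample size described above can be made concrete with a standard margin-of-error calculation. The sketch below uses the familiar normal-approximation formula for a 95 percent confidence interval around a sample mean; the standard deviations and sample size are hypothetical, chosen only to illustrate how greater spread in the population inflates the margin of error at a fixed sample size.

```python
import math

def margin_of_error(std_dev, n, z=1.96):
    """Half-width of an approximate 95% confidence interval for a
    sample mean from a simple random sample (normal approximation)."""
    return z * std_dev / math.sqrt(n)

# Hypothetical weekly-hours data: identical sample size, different spread.
n = 40
tight = margin_of_error(2.0, n)   # low variability across the class
wide = margin_of_error(12.0, n)   # high variability across the class
print(round(tight, 2), round(wide, 2))
```

With the same 40 observations, a sixfold increase in the population's standard deviation produces a sixfold wider margin of error, which is the essence of the precision problem in Duran.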

Just as critical, simple random samples are inadequate if the objective is to rigorously assess commonality. A simple random sample takes as a given the very homogeneity that is in dispute and should be tested in class certification analyses. If a purpose of the sample project is to rigorously assess whether a policy or practice operates in a consistent manner across a proposed class, then the sample must be designed explicitly to accommodate that goal.

For a sample to be useful in the assessment of class certification topics it must allow the analyst to test whether differences exist within subparts of the proposed common class. A subpart of the class might be defined in a number of ways, including organizational characteristics such as division, location or job or individual characteristics such as an employee’s years of experience.

Regardless of the subparts at issue, the key is that sufficient data be gathered from each of them. Sample designs which can accomplish this may be referred to variously as “stratified” samples, “over” samples or simply “separate” samples.[7] These descriptions may refer to different methods depending on the sample project, but in this context the critical element is that the analyzed sample must be sufficiently large in each of the relevant subparts of the proposed class to enable measurement of precise estimates within each subpart and testing for differences between subparts.
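One simple way to operationalize the requirement of sufficient data in every subpart is to allocate a total sample proportionally across strata and then over sample any stratum that falls below a minimum size. The sketch below is illustrative only; the minimum of 30 observations per stratum is an assumed rule of thumb, not a universal standard, and the division names and population counts are hypothetical.

```python
def allocate_sample(stratum_sizes, total_n, min_per_stratum=30):
    """Proportionally allocate a total sample across strata, then top
    up any stratum below an assumed minimum so that each subpart can
    support its own reasonably precise estimate."""
    population = sum(stratum_sizes.values())
    allocation = {s: round(total_n * size / population)
                  for s, size in stratum_sizes.items()}
    # Over sample small strata so within-stratum estimates stay usable.
    return {s: max(n, min_per_stratum) for s, n in allocation.items()}

# Hypothetical division headcounts for a proposed class of 1,000.
divisions = {"production": 600, "delivery": 300, "retail": 100}
print(allocate_sample(divisions, 200))
```

Under a purely proportional allocation the smallest division would receive only 20 observations; the top-up step is what makes within-division estimates and between-division tests feasible.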

Sampling in an Example Misclassification Allegation

Sampling efforts may be undertaken in a wide variety of wage-and-hour cases. For purposes of example, consider misclassification and associated overtime claims alleged on behalf of all assistant managers working for ACME All Service, a company with three distinct divisions with respective responsibilities for production, delivery and retail sales of ACME All Service’s products. The complaint alleges all assistant managers companywide are misclassified since they are primarily engaged in duties that are nonexempt, and that they are entitled to back pay for hours worked in excess of 40 per week. As the assistant managers are classified as exempt, there is no existing data to directly answer the claims relating to duties or hours worked, so a sample-based approach is contemplated for purposes of answering these questions.

In the (unfortunately) typical approach the sample would be drawn from the entire proposed class using a single, simple random framework. Questions could be asked relating to the share of a typical workweek an employee spent engaged in different tasks. These could be more managerial tasks, such as hiring and supervising hourly employees, or nonexempt tasks, such as working on a production line, doing deliveries or cashiering at a retail store. Questions regarding the hours an employee worked over a typical workweek would be asked as well. The results of this approach would allow for one set of answers relating to duties and hours worked, on average, across the entire proposed class. Even assuming a sufficient sample size and relatively good precision in the resulting estimates, this approach only allows for one set of answers.

In an alternative, and markedly superior, approach, the sampling effort could constitute three separate samples for the three different divisions. Each of these samples would stand on its own, with a sample size large enough for calculation of reasonably precise estimates of time devoted to certain duties, or hours worked, for each individual division. Consideration of the need to assess differences between divisions may require additional over sampling in one or more divisions. With three separate sets of answers, the results of this approach allow for rigorous examination of whether the answers are statistically similar or different across the production, delivery and retail sales divisions. If the assistant managers really are similar enough across the divisions, their reported share of time on exempt or nonexempt duties should be statistically indistinguishable based on where they work.
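The between-division comparison described above can be illustrated with a basic two-sample test. The sketch below uses a two-sample z-statistic under a normal approximation, with 1.96 as the approximate 5 percent critical value; the reported time shares are hypothetical numbers invented for this example, not data from any actual matter.

```python
import math
import statistics

def means_differ(sample_a, sample_b, z_crit=1.96):
    """Two-sample z-test (normal approximation): do the two groups'
    mean outcomes differ at roughly the 5% significance level?"""
    diff = statistics.mean(sample_a) - statistics.mean(sample_b)
    std_err = math.sqrt(statistics.variance(sample_a) / len(sample_a)
                        + statistics.variance(sample_b) / len(sample_b))
    return abs(diff) / std_err > z_crit

# Hypothetical percent of the workweek spent on nonexempt duties,
# as reported by sampled assistant managers in two divisions.
production = [70, 75, 80, 72, 78, 74, 76, 73, 77, 71]
retail = [40, 45, 50, 42, 48, 44, 46, 43, 47, 41]
print(means_differ(production, retail))
```

A statistically significant difference of this kind would be direct evidence that the divisions cannot be treated as a single homogenous class; indistinguishable means would point the other way.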

It is worth noting that if a class were certified and the assistant managers were found to be misclassified, then the more rigorous approach to executing the sample would also enable more accurate calculation of damages. In the single simple random sample approach the data would not accommodate separate calculations for the different divisions. By gathering sufficient data for all three divisions the possible damages can be estimated specific to the division. There may well be additional individualized concerns within a division, but at a minimum the approach would ensure damages for an employee in one division are not unduly influenced by employees in another division.[8]
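The division-specific damages point can be sketched in the same spirit. The figures below (average weekly overtime hours, hourly rates, headcounts) are entirely hypothetical, and the time-and-a-half premium is an assumed remedy, offered only to show how per-division sample estimates keep one division's outcomes from spilling into another's damages.

```python
def division_damages(avg_ot_hours, hourly_rate, headcount, weeks=52):
    """Illustrative back-pay estimate: each division's average weekly
    overtime hours (from its own sample) applied only to that
    division's employees, at an assumed time-and-a-half premium."""
    return {d: avg_ot_hours[d] * hourly_rate[d] * 1.5 * headcount[d] * weeks
            for d in avg_ot_hours}

# Hypothetical per-division sample estimates and payroll figures.
estimate = division_damages(
    avg_ot_hours={"production": 5.0, "delivery": 3.0, "retail": 0.5},
    hourly_rate={"production": 20.0, "delivery": 22.0, "retail": 18.0},
    headcount={"production": 600, "delivery": 300, "retail": 100},
)
print(estimate)
```

Under a single pooled average, the retail division's comparatively modest overtime would be overstated by the heavier overtime reported in production; separate estimates avoid that distortion.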

Implications for Use of Samples in Rigorous Analysis

Recent case history demonstrates a trend toward increased scrutiny at earlier stages of class action litigation. If sampling is contemplated in wage-and-hour class actions, care should be taken to ensure the sample design and execution allow for reasonably precise estimates and rigorous assessment of whether common class treatment is appropriate.

[1] 131 S. Ct. 2541(2011).

[2] 133 S. Ct. 1426 (2013).

[3] 705 F.3d 770 (7th Cir. 2013).

[4] Duran v. U.S. Bank National Association, No. S200923 (Cal. May 29, 2014).

[5] Another focus of the California Supreme Court’s decision was on the flawed sample effort underlying the liability and damages analyses in Duran. Foremost among the problems there was that the sample ultimately analyzed was not randomly drawn and was subject to bias. Any results based on this sample could not be representative and therefore could not be generalized to the rest of the class. In addition, even if the analyzed sample had been drawn at random, given what turned out to be wide variability between the sampled class members in their reported outcomes, the sample size was too small to provide reasonably precise estimates.

[6] Note that the use of sampling may be unnecessary or inappropriate in many cases. To the extent sampling approaches are contemplated, a long list of factors needs to be carefully considered in the design and execution of the sample project.

[7] Stratified samples are designed to sample from separate parts of a population and may or may not include sufficient data to perform tests of commonality hypotheses. Over sampling refers to sampling extra observations for a particular subgroup to ensure more precise estimates are calculable for the subgroup. “Separate” samples refer to treating each relevant subpart of a larger population as its own population to be sampled from. Separate samples would be drawn as part of an overall strategy, but rely on separate assumptions regarding the parameters of each individual subpart.

[8] In this example, the separate samples approach is organized around the division as the key subunit of interest. If assessment of possible variation within a division were of interest (between jobs, locations, departments, etc., in a given division), a sampling strategy could be designed around that possibility instead.

This was originally published in Law360 on May 29, 2015. 
