METHODOLOGY AND SCORING

 

Overview of methodology

 

The policies we examine here by no means represent the entirety of equity policies. Rather, they are a collection of some of the bolder ideas that show promise of making a dent in our collective challenges. Each policy has been scored on a qualitative scoring matrix; a description of each of the three scoring dimensions follows.

 

EASE OF IMPLEMENTATION

 

The Ease of Implementation dimension is scored along three subcomponents:

 

·      Political Feasibility

o   The extent to which a policy is acceptable to relevant decision makers and stakeholders. This component is scored as having Low, Medium, or High difficulty.

·      The Need for New Structures/Organizations, or Major Systemic Reforms

o   This component is akin to the technical or administrative criteria found in public policy analyses and is scored along a binary classification – Yes or No.

·      Hierarchical Complexity (can the policy operate at multiple societal levels?)

o   The greater the number of levels at which the policy can be implemented, the greater the potential ease of implementation. This component has binary scoring – Single-level or Multi-level implementation.

 

While cost is an important consideration for ease of implementation, it is assessed separately in this framework. The framework situates cost within a broader conversation that weighs societal benefits alongside financial expenditures, seeking to place each policy within a more comprehensive accounting of its value.

 

Note: for any of the sub-components, the scores are not linear and do not reflect an exact degree of distance or value between the categories. Scores are meant only to provide a heuristic for comparing one phenomenon relative to others (e.g., from more desirable to less desirable).

 

 

DIMENSIONAL CRITERIA                         SCORE

Political Feasibility
     LOW                                       5
     MED                                      10
     HIGH                                     25

Need for New Structures
     YES                                      10
     NO                                       25

Hierarchical Complexity
     Multi Level                              10
     Single Level                             25
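
As a rough illustration, the point values in the table above can be tallied for a single policy as follows. This is a minimal sketch rather than the framework's own implementation: the dictionary and function names are hypothetical, and the assumption that the three subcomponent scores are simply summed into a dimension total is ours, not something the framework specifies.

# A minimal sketch (not the framework's implementation) of tallying the
# Ease of Implementation point values from the table above. Summing the
# three subcomponent scores into one total is an assumption for illustration.

POLITICAL_FEASIBILITY = {"LOW": 5, "MED": 10, "HIGH": 25}
NEW_STRUCTURES_NEEDED = {"YES": 10, "NO": 25}
HIERARCHICAL_COMPLEXITY = {"MULTI": 10, "SINGLE": 25}

def ease_of_implementation(feasibility: str, new_structures: str, levels: str) -> int:
    """Combine the three subcomponent point values for one policy."""
    return (POLITICAL_FEASIBILITY[feasibility]
            + NEW_STRUCTURES_NEEDED[new_structures]
            + HIERARCHICAL_COMPLEXITY[levels])

# Example: high political feasibility, no new structures, single-level policy.
print(ease_of_implementation("HIGH", "NO", "SINGLE"))  # -> 75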


IMPACT AND RETURN ON INVESTMENT

 

A policy’s “return on investment” rating reflects an assessment of the policy’s potential to achieve benefits for individuals, families, and communities while remaining fiscally sound relative to its financial costs. Here we consciously use elements of two concepts, return on investment and cost-benefit analysis, synergistically, both because public policy decision making reflects both elements and because the general public tends to use the two concepts interchangeably. The matrix has three components:

 

1) Revenue impacts,

2) Individual/family outcomes, and

3) Social impacts.

 

Revenue Impacts
     Revenue Loss     |     Pays for Itself (breaks even)     |     Return Greater than Costs

Individual/Family Impacts                              Positive     Negative or No Effect
     1. Income, Wealth, Employment
     2. Educational Outcomes
     3. Housing Outcomes
     4. Criminal Justice Involvement
     5. Use of Social Services
     6. Health Outcomes
     7. Social Capital

Social Impacts
     1. Civic Participation
     2. Social Cohesion
     3. Community Health
     4. Economic Development
     5. Community Development
     6. Community/Environment Safety


Conceptually, the components are interrelated and feed one another. The table is divided between revenue impacts and family/social impacts to simplify the relationship between the two and to communicate how each policy is scored.

 

LOW SCORING

 

Policies scoring low on the matrix are characterized by projected revenue loss and little evidence of significant positive impacts in either of the two subcomponents – Individual/Family Impacts or Social Impacts.

 

 

MEDIUM SCORING

 

Policies can score medium if they meet either of two (2) potential scoring outcomes:

·      Revenue loss, but convincing/strong evidence that the policy has or can produce positive family or social outcomes

·      Break-even with respect to revenue, and evidence that the policy has or can produce positive family or social outcomes

 

HIGH SCORING

 

Policies scoring high on this dimension demonstrate a return greater than their costs and strong evidence that they can produce positive family or social outcomes.
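
Read together, the Low/Medium/High rules above can be expressed as a simple decision function. The sketch below is illustrative only: the function and argument names are hypothetical, and combinations the text does not spell out (for example, a return greater than costs but no outcome evidence) are assumed here to default to a low rating.

def roi_rating(revenue_impact: str, positive_outcome_evidence: bool) -> str:
    """Map a revenue impact ('loss', 'break_even', or 'exceeds_costs') and the
    presence of convincing evidence of positive family/social outcomes to a rating."""
    if revenue_impact == "exceeds_costs" and positive_outcome_evidence:
        return "HIGH"
    if revenue_impact in ("loss", "break_even") and positive_outcome_evidence:
        return "MEDIUM"
    # Combinations not described in the text default to LOW (an assumption).
    return "LOW"

print(roi_rating("loss", positive_outcome_evidence=False))       # LOW
print(roi_rating("break_even", positive_outcome_evidence=True))  # MEDIUM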

 

As with every dimension, assessments reflect a high degree of subjective decision making. In this case, the subjectivity lies in judging the strength of the available evidence; where research on a policy is lacking, projections are based on evidence from related literatures.

 

Note that the family/social outcomes classification is not summative. A policy can receive a rating based on performance on just one indicator, given that many policies target a very narrow goal. For example, increasing educational attainment, though a narrow goal, has an outsized impact on a range of related individual and social outcomes.

 

 

RESEARCH BASIS

 

Assessing the overall value and quality of a research literature involves inherent difficulties, including making sense of different research practices from discipline to discipline, weighing competing methodologies, and integrating insights from very different disciplines that use different concepts and measurements. While there is a significant amount of scholarship on assessing the value of individual pieces of research, there is a lack of scholarship on how to assess an entire field. As such, we are left to apply those individual-level assessments and determine whether there is sufficient justification to conclude that a group of studies collectively meets those same standards – that is, whether, on the whole, a group of studies can be said to have enough rigor. While there are certainly very influential studies in any given arena, for the purposes of these analyses the scoring is based on collections of scholarship. When scholarship is lacking in a given area or a related topical area, a limited number of studies may, in some cases, be used to make assessments.

With that in mind, the rubric for this dimension relies on the 2016 work of Mårtensson et al. on evaluating research practice and quality.[1] We borrow two of their primary evaluative criteria – whether research is Credible and whether it is Contributory – which we refer to here as Credibility and Soundness, respectively. Soundness assesses whether the research is relevant, applicable, and generalizable; Credibility is defined by the rigor, reliability, validity, consistency, and coherence of the research.

 

Is the Research Literature Sound?          Yes     No
Is the Research Literature Credible?       Yes     No

 

Scoring is based on the following criteria:

 

STRONG = Sound and Credible

 

In this category, we find research that reflects all of the above criteria: relevance, applicability, generalizability, rigor, reliability, validity, consistency, and coherence.

 

MODERATE = Either Sound or Credible but not both

 

If there are questions about either the overall soundness or the overall credibility of a collection of studies, then it receives a rating of No on that criterion. Questions can always be raised about a given literature, and any literature is ideally always improving, moving toward more definitive answers and higher rigor. On the whole, however, the assessment is made on where the literature stands at present and whether there is enough evidence to rate it accordingly.

 

WEAK = Having serious limitations in both Soundness and Credibility

 

This category comprises policies for which there is little if any substantive research or applicable research from another programmatic field, or for which the available support comes only from anecdotal or discredited studies.
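
Taken together, the three ratings reduce to a simple mapping from the two criteria to a Strong/Moderate/Weak rating. The minimal sketch below illustrates that mapping; the function name is hypothetical.

def research_basis_rating(sound: bool, credible: bool) -> str:
    """STRONG if the literature is both sound and credible, MODERATE if it is
    one but not the other, WEAK if it is neither."""
    if sound and credible:
        return "STRONG"
    if sound or credible:
        return "MODERATE"
    return "WEAK"

print(research_basis_rating(True, False))  # -> MODERATE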

 


[1] Mårtensson, Pär, Uno Fors, Sven-Bertil Wallin, Udo Zander, and Gunnar H. Nilsson. 2016. "Evaluating Research: A Multidisciplinary Approach to Assessing Research Practice and Quality." Research Policy 45(3): 593–603.