Words: 1820 | Pages: 4 | 10 min read | Published: Jun 20, 2019
Across the nation, institutions of higher learning are pitted against one another for scarce state funding. Colleges and universities subject to performance-based funding (PBF) models have seen mixed results, and some observers are skeptical of the models' long-term viability. This paper presents the origins of PBF models and highlights some findings on their effects.
Early forms of performance-based funding (PBF) first took flight under Tennessee’s Higher Education Commission in 1978, and by the mid-2000s at least 26 states had practiced such policies (Harnisch, 2011). Current PBF models differ from their historic counterparts in their planning and design, the indicators they measure, and the amount of funds they award (Thornton & Friedel, 2016). The original methods of PBF placed greater emphasis on completion and transfer rates, leaving advancement and preservation initiatives neglected (Dougherty & Natow, 2010). Early studies of PBF were also found to relate more directly to overall program effectiveness than to student outcomes or institutional performance (Dougherty & Reddy, 2011). Furthermore, the original PBF plans were devised and carried out without the support of educational leaders, resulting in a disconnect between state policy and institutional missions (Thornton & Friedel, 2016). Because they failed to incentivize educational institutions to change, the original PBF methods were allowed to lapse (Miao & Ju, 2012).
In Florida, universities are scored across ten categories: first-time-in-college (FTIC) retention rates with a GPA above 2.0, FTIC six-year graduation rates, AA-transfer four-year graduation rates, the percentage of bachelor's degrees with excessive hours, the percentage of bachelor's graduates employed full-time or continuing in higher education, bachelor's degrees in the STEM (Science, Technology, Engineering, and Mathematics) fields, graduate degrees in the STEM fields, the average cost per bachelor's degree, median wages for bachelor's graduates, and the number of bachelor's degrees awarded to minorities (Frank, 2016). For FY2018-2019, the Florida Board of Governors allocated $560 million in total PBF funds across 11 institutions (Florida Board of Governors, 2018).
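To make the mechanics concrete, the sketch below shows one way a metric-based scoring and allocation scheme of this general kind could be expressed in code. The metric names echo a few of Florida's categories, but the scores, the equal weighting, and the proportional split of the pool are illustrative assumptions only, not the Board of Governors' actual formula.

```python
# Hypothetical sketch of a metric-based PBF allocation.
# Scores, weighting, and the allocation rule are assumptions for illustration;
# only the size of the FY2018-2019 pool comes from the text above.

TOTAL_POOL = 560_000_000  # total PBF dollars cited above

# Assumed 0-10 score per metric for two made-up institutions.
example_scores = {
    "University A": {"ftic_retention": 8, "ftic_grad_6yr": 7, "stem_bachelors": 6},
    "University B": {"ftic_retention": 6, "ftic_grad_6yr": 9, "stem_bachelors": 8},
}

def total_score(metric_scores: dict[str, int]) -> int:
    """Sum the per-metric scores into a single institutional score."""
    return sum(metric_scores.values())

def allocate(pool: float, scores: dict[str, dict[str, int]]) -> dict[str, float]:
    """Split the pool in proportion to each institution's total score."""
    totals = {name: total_score(m) for name, m in scores.items()}
    grand_total = sum(totals.values())
    return {name: pool * t / grand_total for name, t in totals.items()}

if __name__ == "__main__":
    for name, amount in allocate(TOTAL_POOL, example_scores).items():
        print(f"{name}: ${amount:,.0f}")
```

A proportional split of this kind means that any institution scoring below its peers automatically receives less money, which is the competitive dynamic the critics discussed below take issue with.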
The move toward tying funding to educational performance and standardized metrics has drawn arguments from educators, students, and lobbyists. Those in opposition claim that performance-based funding can reduce public access to, and the diversity of, education; that general performance-based funding efforts are not indicative of higher educational performance; and that outcome-based funding will result in the vocationalization of education (Kowal, n.d.). Nicholas Hillman, an assistant professor of education at the University of Wisconsin at Madison who has studied such state-based formulas, argues that performance-based funding is rarely effective, claiming that “performance-based funding regimes are most likely to work in noncomplex situations where performance is easily measured, tasks are simple and routine, goals are unambiguous, employees have direct control over the production process, and there are not multiple people involved in producing the outcome” (Inside Higher Ed, 2016).
The Center on Budget and Policy Priorities found that, in the aftermath of the 2008 recession, states closed some $425 billion in budget shortfalls (Schoen, 2015). Many states applied these cuts to education subsidies: the Center reported states cutting spending by $2,353 per student, a 28% decrease from 2008 (Schoen, 2015). Arizona and New Hampshire cut per-student spending by roughly 50%; eleven other states cut back by 33% (Schoen, 2015). These cutbacks further strained the mechanisms through which universities receive funding, placing greater emphasis on rigid performance measures. But as the economy continues to improve and tax revenues rise, is funding based on performance hampering the education students are receiving?
John W. Schoen, an economics reporter for CNBC, stated that on average students are not receiving a markedly different education than they did 15 years ago (Schoen, 2015). Some universities have actually increased spending on student amenities such as gyms, recreation centers, and dormitory upgrades to attract students in the educational market. Regardless of school spending, the reality remains that subsidies have been cut on the public side, leaving students and families to pay a larger portion of the expense, with private schools further adding to the wealth gap. According to The College Board, the average tuition rate has increased by approximately 5% annually over the last ten years, including increases of 3.7% for private colleges and 2.9% for public colleges between 2014 and 2015; these rates are substantially higher than general inflation and household income growth (Saving for College, 2016).
Because their use is still relatively new, recent state formulas for PBF have received little research on their effects and outcomes (Dougherty, 2013). Their main focus is claimed to be student outcomes; however, institutional impacts have largely been disregarded (Dougherty, 2013). In a qualitative study at Iowa State University, Zoe Mercedes Thornton and Nahra Friedel provide a synopsis of the organizational impact of PBF on four small rural community colleges. Their results indicated that performance-based policies had affected the colleges’ operations, improvement efforts, and how the schools were perceived (Thornton & Friedel, 2016). A 2013 study covering 25 states over 20 years found that PBF had an overall low effect on degree completion (Tandberg & Hillman, 2013). That study also found that it took five years from implementation for any disparity in graduation rates to appear (Thornton & Friedel, 2016). Among the 18 states with community colleges examined, nine saw little to no increase in degree completion, while only four appeared to have statistically significant increases (Thornton & Friedel, 2016).
Kevin J. Dougherty, associate professor of higher education at Teachers College, Columbia University, and researcher Vikash Reddy examined 60 studies of PBF measures and their impacts on educational institutions. They found that although PBF did produce institutional effects such as funding changes, greater scrutiny in data planning, and programmatic and service changes, there was no firm evidence that PBF significantly increased graduation rates, remedial completion rates, or retention rates (Dougherty & Reddy, 2011). In a Texas news report, Claire Cardona writes about the disadvantages of PBF, highlighting that no single formula can measure everything a college does (Cardona, 2013). Her article highlights the fears of one Texas university president, Ray Keck, who believes that a significant portion of institutional efforts are not considered and may even be ignored under PBF models of measurement (Cardona, 2013). Policy analyst Thomas L. Harnisch indicates that as institutions begin to shape their goals around performance funding, there is a risk of detriment to access, equity, institutional mission, and stability (Harnisch, 2011).
A study by Ben Jongbloed and Hans Vossensteyn of the University of Twente in the Netherlands compared governmental PBF policies across Australia, Belgium (Flanders), Denmark, France, Germany, Japan, the Netherlands, New Zealand, Sweden, the United Kingdom, and the United States. They described the mechanisms used for university funding and the extent to which grants to universities are oriented toward performance (Jongbloed & Vossensteyn, 2001). Their study found that most countries were output-oriented with regard to research rather than teaching, largely as a result of research council funding (Jongbloed & Vossensteyn, 2001).
For the United States, their study indicated that the main reason for using performance indicators was as a measure of accountability (Jongbloed & Vossensteyn, 2001). Although incentive funds are allocated in higher education budgets, these incentives were found to be relatively small (Jongbloed & Vossensteyn, 2001). When comparing countries that fund teaching on the basis of measured performance, only three were found to follow this practice: Australia, Germany, and New Zealand (Jongbloed & Vossensteyn, 2001). With regard to funding for teaching, the article indicated that governments are still reluctant to link resources to enrollment-based measures (Jongbloed & Vossensteyn, 2001). One reason may be the belief that performance should be understood in terms of increasing diversity and responsiveness to the needs of students (Jongbloed & Vossensteyn, 2001). The study also indicated that if university grants were tied to enrollment numbers, institutions would be more likely to follow their customer base and forgo academic excellence in pursuit of customer incentives (Jongbloed & Vossensteyn, 2001).
Jung Cheol Shin published a study measuring whether institutional performance changed after states adopted new accountability measures based on PBF. Performance was measured by graduation rates and levels of federal research funding (Shin, 2010). Based on ten years of collected data, the study indicated that states which adopted performance-based accountability did not see a noticeable increase in institutional performance (Shin, 2010). The study further suggested that one reason PBF initiatives did not produce higher institutional performance may be that states fail to fully put in place the components of the reforms (Shin, 2010). New initiatives also may not have included the support needed to bring about the targeted changes within institutions, with the result that faculty did not improve their teaching and research performance (Shin, 2010). Factors outside the range of policy were also found to account for changes in institutional performance, calling into question the validity of the policy measures (Shin, 2010). The study indicated that when there is not enough institutional flexibility, policy involvement is likely to be weakened and thus not yield the proposed targets (Shin, 2010). As a result, the study called for combining performance-based accountability measures with well-grounded institutional practices (Shin, 2010).
Viewed against the history of empirical performance measurement, performance-based funding (PBF) is not a novel idea. In its early attempts, PBF was dropped because it failed to incentivize educational institutions to change (Miao & Ju, 2012). Current research shows that PBF has not significantly impacted degree completion rates (Tandberg & Hillman, 2013), has not significantly increased graduation, remedial completion, or retention rates (Dougherty & Reddy, 2011), and has not noticeably increased institutional performance (Shin, 2010). It has also been found that when there is not enough institutional flexibility, policy involvement is likely to be weakened and thus not yield the proposed targets (Shin, 2010).
While the idea of rewarding the best-performing schools may appear logical and readily reducible to a formula for the general populace, consider how institutions, or any person, are to turn out superior results year after year if the previous year required substantial cutbacks. Would their competitors not remain perpetually ahead, having been granted substantial financial benefits? Though it is difficult for a math-based formula, dictated by political and business interests, to adequately represent the will of society, educational institutions, and educators, modifications to policy measures can lead to sound growth for educational institutions across all spectrums of academia. This could be achieved through greater local discourse when deciding measurement scales, comparison against an institution's own annual percentages rather than lateral competition, and cross-training and cross-structural incentives for institutions. Policy makers must then ask themselves whether educational policy is based on a utilitarian model intended to produce an intelligent and mentally capable populace.