The National Longitudinal Survey of Schools: Implementation of Standards-Based Reform and Title I Supports for Reform

EA 97 0100

I. Background

For over 30 years, the Title I program has served students at risk of school failure who live in low-income communities. It has provided extra resources for school systems to help these students catch up to their more advantaged peers. The program has focused the attention of policymakers and educators on the needs of poor and educationally disadvantaged children and has helped equalize educational opportunities. Yet the National Assessment of Chapter 1, completed in 1993, found that the achievement gap separating students attending high- and low-poverty schools was widening and grew as students moved through the grades. The Assessment also found that the program, which usually operated in isolation from the regular school program and from State and local education reforms, was not strong enough to reduce the achievement gap.

In 1994, Congress passed two landmark pieces of education reform legislation. The Improving America's Schools Act (which reauthorized the Elementary and Secondary Education Act (ESEA)) and the Goals 2000: Educate America Act, passed earlier that year, are designed to change the way schools approach teaching and learning. These laws call for holding all students to high standards of performance in core subject areas. States are to set or adopt content and performance standards for all students and design assessment systems aligned with the standards.

The new Title I operates in this context of challenging standards. The intent is to move schools away from a system in which some students, and Title I students in particular, were held to lower standards, which meant a less challenging curriculum and, as a result, lower achievement. Title I is no longer designed to operate as an isolated supplement to the regular education program, but rather as an integral support for reforms. The Title I program supports standards-driven reforms through a variety of provisions that give more decision-making authority to schools while holding them accountable for student outcomes. The program has shifted from a focus on remediation to the expectation that students will achieve to high academic standards. Title I's objectives are to:

- provide children with an enriched and accelerated educational program;
- promote schoolwide reform, effective instructional strategies, and challenging content;
- upgrade curriculum and instruction;
- coordinate with other education, health, and social service programs;
- provide parents meaningful opportunities to participate in the education of their children at home and at school; and
- distribute resources where the needs are greatest.

More schools are now eligible to become schoolwide programs: any school whose enrollment of children in poverty is at least 50 percent, rather than the previous 75 percent, is eligible. All schools are also required to develop school-parent compacts, which enumerate the shared responsibilities of the school and the parents in helping all children reach challenging academic standards. Districts are required to develop school performance profiles, one means of holding schools accountable for the performance of their students. Title I also requires that States develop annual targets for the adequate progress of schools in having all students reach challenging academic standards.
The entire assessment system has changed, with a new emphasis on ongoing monitoring of progress and changing course as needed to meet performance targets.

The Planning and Evaluation Service (PES) in the U.S. Department of Education (ED) is currently carrying out a number of studies designed to collect information on the implementation of the new Title I provisions and on progress in setting new standards and developing assessment systems. These studies are mandated as part of the National Assessment of Title I (NATI) in the reauthorized ESEA. The law requires, in section 1501, the following: "The Secretary shall conduct a national assessment of programs assisted under this title...The assessment shall examine how well schools, local educational agencies, and States are progressing toward the goal of all children reaching the State's challenging State content standards and challenging State student performance standards..."

The Government Performance and Results Act of 1993 (GPRA), P.L. 103-62, also requires that ED establish annual performance plans and reports for all Department programs. The reports must describe performance indicators, actual results, and summary evaluation findings. This study, along with the other NATI studies, will provide the information necessary for describing progress in meeting the performance indicators for the Title I program.

The NATI studies examine awareness and implementation of content and performance standards; the new Title I provisions (such as the lower poverty threshold and new comprehensive reform plan required for schoolwide programs, the emphasis on extending learning time, the provision of an enriched and accelerated curriculum, and professional development linked to helping students reach the new performance standards); and new assessment systems at the State, local, and school levels. Baseline, early implementation, and full implementation are being tracked over several years in a formative manner so that data can be provided to States, school districts, and schools as they continue to improve their reform efforts and their use of Federal programs to support these changes.

This study will collect national information on school-level implementation of standards-based reforms and the new Title I provisions supporting such reforms over two school years. The longitudinal survey will be closely linked to the other implementation studies through embedded sampling and replication of selected survey items. This study will also be linked to a national evaluation of the Eisenhower professional development program, to ensure that comparable data are gathered regarding professional development provided to and needed by school staff.

The longitudinal survey relates most closely to the Longitudinal Evaluation of School Change and Performance (LESCP). The areas being examined in the LESCP in a purposive sample of schools are the same ones this study will explore in a nationally representative sample of schools. The LESCP evaluation questions will focus on: the relationship of school reform to student performance; awareness of school change; curriculum content; quality of teaching; articulation across grades and curriculum; at-risk populations; professional development; parental support and community involvement; learning environment; and external supports and assistance.
This study will provide greater breadth across most Title I implementation questions, since it will look at a greater variety of schools and at different school levels (both elementary and secondary), with slightly less depth on assessment and outcomes, since the LESCP will administer a standardized test as part of that study and will analyze assessment data in a more comprehensive fashion than this study.

II. Study Purposes

The purposes of the national longitudinal survey of schools are to examine and describe how schools are using standards-based reforms to assist in improving learning, with a particular focus on implementation of the new provisions in the Title I program that are designed to support such improvements. The study will be closely linked to the NATI district surveys so that implementation can be traced through the different levels of the education system. The study will look at the extent to which schools use assessment results to change classroom practice and to set both short- and long-term improvement targets.

This will be the only large-scale study to provide nationally representative information on Title I operations at the school level. It will also be the only study able to provide information on schools serving significant proportions of migrant, limited-English proficient (LEP), and Native American students, and on schools that have been identified for school improvement under Title I. In addition, this study will provide in-depth information on a small sample of schools through case studies, with observations at the classroom level.

The study will examine a variety of implementation issues, as discussed below. The primary data collection method to be used for each issue or topic is noted in parentheses.

Awareness and Understanding of Standards

- To what extent do principals and teachers know what their State's or district's content and performance standards are? In which subjects have standards been developed? Does the school have additional standards? How are standards used in the school? (SURVEY)
- What are their perceptions with respect to the standards? To what extent do school staff believe the standards will provide a framework for improving teaching and learning in their schools? Do they think the standards are sufficiently rigorous? To what extent do they believe their students will be able to meet the standards? (SURVEY)
- To what extent do school staff understand the new Title I provisions and their obligations and opportunities under these provisions? To what extent do they believe the provisions will contribute to educational success? (SURVEY)
- To what extent are school staff aware of what they need to do to improve student achievement? (CASE STUDIES)
- Do schools have the resources needed, and the control over them, to implement standards-based reforms? How are resources from Title I allocated within schools? (SURVEY)
- Are there any differences in perceptions and awareness levels between staff in targeted assistance and those in schoolwide programs? Are there differences in perceptions and awareness levels between staff in elementary vs. secondary schools? (SURVEY)

Standards-Driven Planning

Goals (ENTIRE SECTION TO BE COLLECTED PRIMARILY THROUGH CASE STUDIES)

- Do schools have a statement of purpose and vision that lays the foundation for planning and goal-setting? Are the purpose and vision aligned and consistent with those at the district and State level?
- Do schools use a strategic planning process to set goals for student learning? Does the process include collaboration between school staff and parents? To what extent are the goals tied to State content and performance standards? Does the school set short-term improvement targets to move student achievement upward from the current performance baseline? How does the State or district in which the school is located define adequate yearly progress, and which grade levels and content areas are included?
- How do schools set learning goals, both short- and long-term, if their district or State does not yet have content and performance standards in place? How are transitional assessments used to monitor student progress?
- Have schools established a baseline of student performance in reading and mathematics? Are schools responding to the Clinton Administration's goal of having all students read by the end of third grade, reach math targets by the end of eighth grade, and be prepared for work and/or college through adequate academic and skills training? Have they identified needs in particular subject areas and grades? Do teachers receive baseline and other assessment information for different groups of students? How does the school decide where it has opportunities to improve? What do schools benchmark against in setting academic goals? (SURVEY ALSO)

Strategies

- Have schools identified what training and other resources are needed to carry out these strategies? Where do schools shift resources to support improvement strategies? How does Title I fit into the picture as a resource for carrying out improvement strategies? Are resources shifted in different ways, and are different improvement strategies used, in schools serving migrant, LEP, and/or Native American students? How much control over these decisions does the school have, versus the district? (SURVEY)
- What types of strategies are being used by schools identified for improvement? Are the strategies based on sound evidence of effectiveness? Do they reflect an approach that aligns content with teaching and professional development? Are schools using approaches that extend learning time? (CASE STUDIES)

Monitoring Implementation

Curriculum and Instruction

- In targeted assistance schools, how are Title I services used to support improvement strategies? Have services been configured differently as a result of the school's planning and monitoring processes? How are aides used, and how has this changed over the past year? How are services configured in schoolwide programs? (SURVEY)
- How are schools promoting continuity of education for mobile migrant children? Have curricular materials changed, and have the changes been consistent across States serving mobile migrant children? Is Title I (Part A and Part C) used to provide specialized services to middle and secondary school-age working migrant youth? Is Title I (Part A and C) used to support summer programs for migrant students? (SURVEY)
- To what extent have Title I services to children with limited-English proficiency changed since reauthorization?
- In schoolwide programs, how does the school ensure that the needs of all students who previously received separate Federally-supported services are met? If Title I funding were not available, what would the schoolwide program look like? What other Federal program funds are used to support the schoolwide program, and are there differences in funding sources between elementary and secondary schools?
  How have schools changed their instructional program if they have just begun a schoolwide program and previously operated as a targeted assistance program? (CASE STUDIES)
- Does the school offer after-school, summer, or tutoring programs to extend learning time? (SURVEY)
- Has the number of secondary schools receiving Title I funds increased since reauthorization, and by how much? What proportion of secondary schools operate schoolwide versus targeted assistance programs? (SURVEY)
- How do schools coordinate with health, nutrition, and social service providers to meet the needs of children and their families? Are these areas supported in the school with Federal program money? (SURVEY)
- How do schools work with early childhood programs such as Even Start and Head Start? (SURVEY)
- Do teachers engage in joint planning to ensure that the curriculum they teach is targeted on high academic performance standards? (CASE STUDIES)
- Are teachers using new textbooks or other curricular material that align with new State standards? Does the curriculum taught embody the knowledge and skills outlined in emerging State standards or curriculum frameworks? Do the textbooks, technology, and other instructional resources reinforce and extend the curriculum? (CASE STUDIES)
- Are instructional practices changing in response to the need to teach a more content-rich curriculum? What types of instructional strategies are currently being used? Does teaching effectively use and extend learning time? (CASE STUDIES)
- Does teaching address the diverse needs of students, engage students, and motivate them to attain high standards? (CASE STUDIES)
- Has the work being done to develop content and performance standards served as a catalyst for change in the school? (CASE STUDIES)

Professional Development (ENTIRE SECTION TO BE ADDRESSED PRIMARILY THROUGH CASE STUDIES)

- In what areas has professional development been provided for teachers and other school staff? What are the links among professional development, school goals, State content and performance standards, and improved outcomes for students? Does the school have a professional development plan?
- Who has provided the professional development, and which providers have been most helpful? Is professional development valued by teachers as worthwhile for improving their instruction?
- To what extent has the school worked with State-established school support teams? How helpful has the assistance been?
- How much Title I money supports professional development? How is professional development coordinated across the different Federal programs (Title I, Title II, Title VII) operating in schools?
- Is professional development sustained, to enable staff to put what they are learning into classroom practice, assess that practice, and adjust it accordingly?

Parent and Community Involvement (ENTIRE SECTION TO BE ADDRESSED PRIMARILY THROUGH SURVEY)

- Do schools have a process for developing parent-school compacts? Who participated in developing the compact? In a targeted assistance program, does the school use compacts for all students or just those served by Title I? What is included in the compact? How do schools use compacts over the course of the school year? How are compacts changed if they are determined to have little effect?
- How are parents and the community kept informed about standards development and its impact on the curriculum and on classroom instructional practices?
  Do schools translate standards into terms families can understand (for example, the level of work required to earn a good grade)?
- What tools are parents given to help with their children's learning? Are any special procedures in place for school personnel to involve migrant parents in the education of their children?
- How is the school's parent involvement policy developed? Is it specialized for the school, or is it the district Title I policy? What areas are covered in the policy?
- How are schools evaluating their efforts to involve parents and the larger community? Which strategies for parent involvement are most successful?
- Are schools implementing family literacy programs?
- How are schools reaching out to parents of limited-English proficient children? Are translations of key school informational documents made available to parents? Are school profiles prepared in other languages?

Technology (SURVEY and CASE STUDIES)

- How are schools using technology in the classroom? Do students have access to the Internet? Do mobile migrant students receive consistent access to computers? Are teachers provided with sufficient training to learn to use technology to enhance and enrich instruction?
- To what extent are there barriers to schools' acquisition or use of advanced telecommunications capabilities?

Reporting and Feedback for Improvement (ENTIRE SECTION TO BE ADDRESSED PRIMARILY THROUGH CASE STUDIES)

- What strategies does the school use to monitor its progress toward reaching goals? How do schools monitor the progress of different groups of students and of different classrooms within the school? How are district and State assessment data used? Do school staff have the training they need to handle and analyze data?
- How are schools assessing the performance of limited-English proficient children?
- If progress targets are not met, what does the school do to improve? Where do schools get their ideas for strategies? How are teachers involved in decisions regarding the improvement process? How well are these strategies supported by a research base?
- Do schools receive, analyze, and present information in a timely enough fashion to influence practice? Are results presented clearly, to highlight areas of progress and areas in need of strengthening?
- Do schools customize their profiles in a clear and nontechnical format, so that families and the community can understand their school's results? Do principals and teachers perceive the school profile as a useful document?

III. Approach to Evaluation

The study issues specified above will be addressed through several different methods of data collection and analysis, applied to a longitudinal, nationally representative sample of Title I schools; additional samples of schools serving significant proportions of migrant, limited-English proficient, and Native American students; and a separate sample of schools identified for school improvement under Title I. First-year case study data will be collected in the spring of 1998, followed by a second collection in the spring of 1999. Survey data will be collected in the fall of 1998 and the fall of 1999.

Case Study Design

The focus of the case studies will be on classroom practices, using observational techniques as well as focus groups and interviews with school staff to discuss their perceptions regarding the implementation of reforms and the new Title I provisions, and the professional development they have received.
The case study data collection will assess the extent to which there is a common understanding of standards and of the assessments designed to measure progress toward meeting performance standards. The case studies will also look at the extent to which schools are operating differently since reauthorization of the Title I program, using strategic planning, and engaging in a continuous improvement process as they help students achieve to high standards.

Survey Design and Document Collection

The fall 1998 and fall 1999 survey data collections shall be conducted through the Computer-Assisted Telephone Interview (CATI) system. Both school principals and teachers will be interviewed. Survey questions should build significantly on surveys previously used or currently in use in other PES-sponsored evaluations. These include the LESCP, the Fast Response Survey System (FRSS) principal and teacher surveys conducted in 1996, the Follow-Up Survey on Education Reform conducted this year, the State Implementation Study, the two District Surveys, the planned Resource Allocation Study, the Eisenhower study, and previous implementation studies such as the Chapter 1 Implementation Study. To the extent possible, this study should also build on items used in the Third International Mathematics and Science Study, the Schools and Staffing Survey, and the Early Childhood Longitudinal Study, all coordinated through the Department's Office of Educational Research and Improvement.

Given the many surveys already in use or under development, it will not be necessary for the school study to develop an entirely new set of survey questions. Rather, it will be able to replicate many items and add items as needed to address any other significant issues. It is therefore critical that, throughout the survey development phase, the contractor meet regularly with the ED offices and contractors conducting the related studies.

As part of the data collection, the contractor will also request that a subsample of the surveyed schools provide copies of school planning documents, including: schoolwide program plans, school performance profiles, assessment results, report card formats, school improvement plans, parent-school compacts, and targeted assistance planning documents. These documents will provide a more complete picture of school-level planning and operations. Analyses of these documents will build on analyses currently underway as part of the Follow-Up Survey on Education Reform and the LESCP.

Sample Selection

The nationally representative sample of Title I schools will need to link to the other samples being drawn as part of the National Assessment of Title I. The sample should be drawn from the districts sampled in the district survey, with schools added so that the sample is nationally representative. Schools will need to be stratified by schoolwide versus targeted assistance status and by school improvement status (under the Title I requirements, schools that have not met their adequate progress targets for 2 years must develop and implement a school improvement plan). Because this study is the only nationally representative study of schools, it is important that separate samples be drawn for schools with significant populations of migrant, limited-English proficient (LEP), and Native American students, since there will not be sufficient numbers of these schools in the sampling frame used for the representative sample.
An additional sample shall be drawn of schools that have been identified for school improvement, to focus on how the lowest performing schools are undertaking their improvement efforts.

Scope of Work

Task 1: Meetings with ED and Other Relevant Groups

Subtask 1.1: Meet with Contracting Officer's Technical Representative (COTR) and Other ED Staff

The contractor shall meet with the COTR, the Contract Specialist, staff from the Title I program office, and other ED staff as appropriate within 2 weeks after the effective date of the contract; the meeting shall be held in Washington, D.C. if the contractor is located in the Washington, D.C. metropolitan area, and via teleconference if the contractor is located outside it. The purpose of the meeting will be to discuss upcoming tasks, the survey analysis plan, study design issues, and the scheduling of meetings with other contractors and COTRs to discuss in further depth the links between the studies. The contractor shall prepare a draft summary of the meeting, including a list of next steps, within one week after the meeting. After a one-week ED review, the contractor shall submit a revised, final summary to ED no later than 4 weeks after the effective date of the contract.

Subtask 1.2: Hold Quarterly Progress Meetings with ED

The contractor shall provide progress updates at quarterly project status meetings with the COTR and other ED staff in Washington, D.C. The contractor shall discuss the status of all tasks, the achievement of project milestones, expenditure information, quality control information, any problems or concerns that might affect the project, and any other topic of interest. In particular, the contractor should discuss any work that is behind schedule and plans to complete such work. The contractor shall submit a draft agenda and proposed handouts for ED's approval 2 weeks before each meeting is held. After a 3-day review by ED, the contractor shall revise the agenda and handouts as necessary and send them to attendees one week prior to the meeting. The meetings will be held at ED. The contractor shall submit to ED a meeting summary within one week after each meeting. The summary shall include a list of next steps. After a one-week ED review, the contractor shall submit revised, final minutes no later than 2 weeks after each meeting.

Subtask 1.3: Meet with Other Contractors

The contractor shall meet with ED staff and other contractors conducting work for ED in order to coordinate studies. These meetings will be scheduled by ED approximately twice a year, for one day each, and held in Washington, D.C.

Task 2: Establish Technical Work Group and Convene Meetings

Subtask 2.1: Establish Technical Work Group

The contractor shall form a Technical Work Group of no fewer than 8 and no more than 12 people to provide the contractor with outside expertise on the conduct of the study, including refinements of the study design; data collection and instrumentation; analysis plans; and the quality, content, and format of study reports. The work group members shall be selected based on their expertise in one or more of the following areas: sampling and longitudinal survey methodology; large-scale study design; knowledge of Title I and related Federal programs; standards-based reform processes and the policy context for implementing education reforms; and issues specific to the education of migrant, LEP, and Native American students. The contractor is free to accept or reject any advice or recommendations individual work group members offer.
The contractor shall submit a final list of proposed work group members for approval by ED no later than 1 week after the effective date of the award. The list is to be based on names submitted to ED as part of the proposal and shall include representatives from the Independent Review Panel of the National Assessment of Title I. The list shall discuss the strengths of each potential advisor and explain the role each will play in helping achieve the objectives of the evaluation. After a 1-week ED approval, the contractor shall contact each member and formally invite him or her to serve on the work group within 3 weeks of the contract's effective date. The contractor shall finalize the group membership no later than 4 weeks after the effective date of the contract.

Subtask 2.2: Convene Technical Work Group

The contractor shall convene the first meeting of the technical work group within 8 weeks after the effective date of the contract. The purpose of the meeting will be to discuss plans for the case studies, the data collection instruments, and the analysis plan. The contractor and COTR shall jointly decide on the timing and purposes of the subsequent meetings after the first meeting. During the course of the contract, the contractor shall convene the work group for approximately 6 meetings of one to one and one-half days each. ED staff will attend and participate as appropriate in these meetings. The contractor shall convene all meetings in the Washington, D.C. metropolitan area.

The contractor shall develop a schedule for succeeding meetings after the first work group meeting. Three weeks prior to each meeting, the contractor shall submit a draft agenda for review by ED. After a three-day review by ED, the contractor shall revise the agenda as required and include it in the briefing materials described below. The contractor shall prepare briefing materials to be sent to the work group one week prior to each meeting. The contractor shall include at least the following in the briefing materials: the agenda, status reports, background information on issues to be discussed, and any draft reports to be discussed at the meeting. The contractor shall submit the draft briefing book to ED three weeks prior to each meeting. After a three-day review by ED, the contractor shall revise the briefing materials as necessary and send them to all participants so that they receive them one week before each scheduled meeting. The contractor shall prepare and submit to ED summary minutes of the work group meetings 1 week after they take place. After a one-week review by ED, the contractor shall revise the minutes based on ED comments and submit a final copy to ED no later than two weeks after the work group meetings.

Task 3: Refine the Baseline Management Plan

The contractor shall refine the baseline management plan submitted in the proposal to reflect topics discussed in the initial meeting with ED and items raised in negotiation, no later than 4 weeks after the effective date of the award. The contractor shall include in the refined plan critical path diagrams and GANTT or PERT charts, including person-loading charts by task. After the first year, the contractor shall refine and update the plan for subsequent years to incorporate refinements as needed.
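The critical path diagrams Task 3 calls for rest on a standard computation over task durations and dependencies. The sketch below is a minimal illustration of that computation, not a required implementation; the task names, durations, and dependency structure are hypothetical placeholders, not part of this work statement.

```python
# Minimal critical-path sketch for a management plan of the kind Task 3
# requires. All task names and durations below are invented placeholders.
tasks = {
    # task: (duration_in_weeks, [predecessor tasks])
    "refine_plan":        (4,  []),
    "instruments":        (8,  ["refine_plan"]),
    "omb_clearance":      (12, ["instruments"]),
    "select_sample":      (10, ["refine_plan"]),
    "collect_data":       (16, ["omb_clearance", "select_sample"]),
    "analyze_and_report": (12, ["collect_data"]),
}

def critical_path(tasks):
    """Forward pass for earliest finish times, then trace the longest chain."""
    finish = {}
    def earliest_finish(name):
        if name not in finish:
            duration, preds = tasks[name]
            finish[name] = duration + max(
                (earliest_finish(p) for p in preds), default=0)
        return finish[name]
    for name in tasks:
        earliest_finish(name)
    # Walk back from the latest-finishing task along its latest predecessor.
    end = max(finish, key=finish.get)
    path = [end]
    while tasks[path[-1]][1]:
        path.append(max(tasks[path[-1]][1], key=lambda p: finish[p]))
    path.reverse()
    return path, finish[end]

path, total_weeks = critical_path(tasks)
print(" -> ".join(path), f"({total_weeks} weeks)")
```

In a real plan, the placeholder dictionary would be replaced by the contract's actual tasks and person-loading, and the result would drive the GANTT or PERT charts the plan must contain.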
Task 4: Refine Data Collection Plan

The contractor shall refine the data collection plan submitted in the proposal to reflect the data collection described in subtasks 4.1 and 4.2. The contractor shall submit the refined plan no later than 4 weeks after the effective date of the contract. ED shall provide comments within 2 weeks of receipt of the refined plan, and the contractor shall incorporate these comments into a final plan no later than 8 weeks after the effective date of the contract.

Subtask 4.1: Describe the Data Collection

The contractor shall include in the refined plan a description of all data to be collected, the methodology to be employed, and the activities for each data collection. The plan shall also describe how this study links with the other NATI studies. The contractor shall include a matrix or chart showing all data collection activities, when they will occur, the number of schools to be surveyed, the number of principals and teachers to be selected for the sample, and the rationale for the sample size. The contractor shall include another matrix that shows the source for each survey item (i.e., previous surveys). The plan shall also finalize the case study topics proposed in the data collection plan submitted in the proposal. The contractor shall include in the plan a discussion of the procedures to be used to reduce participant burden and to obtain a response rate of 85 percent on all data collection instruments used in this study (a minimal sketch of monitoring such a target appears after Subtask 5.1 below).

Subtask 4.2: Describe Data Handling

The contractor shall include in the plan a description of how the data collection instruments will be stored and maintained before, during, and after administration in the schools. The contractor shall include a description of the procedures to be used for compliance with the Privacy Act for all individual and institutional data collected in this study. The contractor shall also include in the plan a description of the data processing, coordination with other studies, and any other relevant issues, including information regarding expected costs, time, burden, and options. The contractor shall specify the procedures that will be used to ensure respondent confidentiality. The contractor shall maintain information that identifies persons or institutions in files that are separate from other research data and that are accessible only to authorized agency and contractor personnel.

Task 5: Prepare and Review Data Collection Instruments

Subtask 5.1: Prepare Data Collection Instruments and OMB Package

The contractor shall develop data collection instruments, including computer-assisted telephone interview instruments (separate surveys for staff in schoolwide elementary schools, schoolwide secondary schools, targeted assistance elementary schools, and targeted assistance secondary schools), classroom observation protocols, focus group and in-person interview topics/protocols, and document collection forms, no later than 8 weeks after the effective date of the contract. After a 2-week ED review, the contractor shall submit revised instruments, with an accompanying Office of Management and Budget (OMB) clearance package, to ED no later than 12 weeks after the effective date of the contract. ED will submit the OMB package to ED's Information Management Team for a one-month review prior to submission to OMB. The contractor shall revise the instruments and OMB package in response to comments received as part of the ED review process and shall continue revisions during the OMB review process over the course of approximately 3 months.
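As a companion to the 85 percent response-rate requirement in Subtask 4.1, the following is a minimal sketch of how completion might be tracked by stratum during fielding. The stratum names echo the four survey categories in Subtask 5.1; the counts are invented for illustration and are not study data.

```python
# Minimal sketch of monitoring the 85 percent response-rate target from
# Subtask 4.1. Counts are hypothetical placeholders, not study results.
TARGET = 0.85

# stratum: (completed interviews, sampled respondents)
strata = {
    "schoolwide elementary":          (412, 460),
    "schoolwide secondary":           (118, 150),
    "targeted assistance elementary": (367, 420),
    "targeted assistance secondary":  (96, 130),
}

for name, (completed, sampled) in strata.items():
    rate = completed / sampled
    flag = "" if rate >= TARGET else "  <-- below target; schedule follow-up calls"
    print(f"{name:32s} {rate:6.1%}{flag}")

overall = sum(c for c, _ in strata.values()) / sum(n for _, n in strata.values())
print(f"{'overall':32s} {overall:6.1%}")
```

Tracking by stratum, rather than overall, is what exposes the low-responding categories (here the two hypothetical secondary strata) early enough for targeted follow-up.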
Subtask 5.2: Review Data Collection Instruments

The contractor shall review all data collection instruments after each round of data collection to determine the extent to which the instruments adequately addressed the research questions and to determine which items within instruments need revision. The contractor shall recommend to ED which items or instruments, if any, need revision, and shall provide a written rationale for each suggested revision no later than 4 months after each data collection cycle. After a two-week review by ED, the contractor shall prepare a copy of the revisions for submission to OMB for approval no later than 5 months after completion of each data collection.

Subtask 5.3: Reproduce Data Collection Instruments

Upon approval by OMB, the contractor shall reproduce sufficient data collection instruments for each respondent in each school in each category for each data collection. During the first year of the award, the contractor shall submit to ED, prior to reproduction, a mock-up of each instrument no later than 1 week after OMB approval.

Task 6: Select and Notify Sample and Relevant Organizations

Subtask 6.1: Select Samples

Based on the case study topics proposed in the proposal, the contractor shall prepare a list of 25-30 schools, from which approximately 10-15 shall be selected for site visits, no later than 10 weeks after the effective date of the award. After a 2-week ED review, the contractor shall develop the final list of schools for site visits no later than 14 weeks after the effective date of the award. Possible candidates for site visits could include: schools identified for improvement that are embarking on comprehensive improvement efforts; schools serving high proportions of limited-English proficient, migrant, or Native American students; secondary schools; schools in districts that are in different phases of reform; and schools in States with well-developed assessment systems that are aligned with content and performance standards.

The contractor shall use the districts selected for the District Survey as a district sampling frame from which to identify schools that will comprise a portion of the representative sample for this study. The contractor shall add a sufficient number of schools to those selected from the district sample to make the school sample nationally representative. A 2-step process shall be used, in which the contractor contacts the districts to request lists of schools receiving Title I funds, subsequently draws the sample from those schools, and adds schools for representativeness (a minimal sketch of such a draw follows Subtask 6.5 below). As indicated earlier, the sample will be stratified based on poverty level, urbanicity, schoolwide versus targeted assistance status, and school improvement status. It is possible that individual schools in the sample may shift their programs from targeted assistance to schoolwide, or vice versa, over the life of the contract. Separate samples shall be drawn for schools serving a significant proportion of migrant, limited-English proficient, and Native American students. A separate sample shall also be drawn for schools identified for school improvement under the Title I provision for schools not making adequate progress toward meeting standards for two years. The contractor shall select the samples no later than 10 weeks after the effective date of the contract.

Subtask 6.2: Notify Samples

The contractor shall prepare notification letters and information packets for the schools selected for all of the samples.
The contractor shall include in the letters and information packets general information on the study as well as specific information on the data collection schedule and plans; a discussion of the importance of the study; its purposes, products, scheduled data collections, and sample; provisions for maintaining the anonymity of survey participants; data security; the organizations and persons involved in the study; and the benefits to be derived from the study. The contractor shall submit draft letters and all other notification materials to ED no later than 12 weeks after the effective date of the award. After a one-week review by ED, the contractor shall revise the information packets as needed and print sufficient copies no later than 15 weeks after the effective date of the award.

As part of the notification packet, the contractor shall prepare a non-technical tri-fold brochure describing the study that is suitable for distribution to a broad audience of policy makers, educators, and managers of education programs. The contractor shall submit this draft brochure to ED no later than 4 weeks after the effective date of the award. After a two-week review by ED, the contractor shall revise the brochure as needed and have 250 copies printed no later than 15 weeks after the effective date of the award. The contractor shall provide 50 copies to ED and include the remainder in the dissemination packets.

Subtask 6.3: Obtain EIAC Cooperation

The contractor shall prepare and submit to ED a draft letter to the Education Informational Advisory Council (EIAC) state coordinators identified by ED, to seek EIAC support for the evaluation, no later than 12 weeks after the effective date of the award. ED staff will revise the letter for the Under Secretary's signature no later than 13 weeks after the effective date of the award. The contractor shall mail the approved letter and related materials to the state coordinators identified by ED no later than 14 weeks after the effective date of the contract.

Subtask 6.4: Notify State Education Agencies

The contractor shall prepare a letter to participating Chief State School Officers announcing the study and site visits, with a copy to all identified state contacts for Goals 2000 and federal categorical program implementation, no later than 12 weeks after the effective date of the award. After a 2-week review by ED, the contractor shall mail the approved letter and related materials to the Chief State School Officers in participating states no later than 15 weeks after the effective date of the award.

Subtask 6.5: Notify Superintendents and Appropriate School Personnel

The contractor shall prepare a letter to all school superintendents and appropriate school personnel in the local educational agencies and schools included in the sample announcing the study, with a copy to all identified state contacts for Goals 2000 and federal categorical program implementation. The contractor shall also note in the letter that a separate letter will be sent to participating principals, to be followed by telephone calls. The contractor shall submit the letter to ED within 12 weeks after the effective date of the award. ED staff will revise the letter for the Under Secretary's signature. The contractor shall mail the approved letter and related materials to superintendents and school personnel no later than 15 weeks after the effective date of the award.
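The two-step, stratified draw described in Subtask 6.1 can be sketched mechanically. The following is a minimal illustration under stated assumptions: the school records, stratifier values, and frame size are synthetic, and the proportional allocation stands in for whatever allocation the contractor's sampling statisticians actually specify.

```python
# Minimal sketch of the two-step draw in Subtask 6.1: districts supply lists
# of Title I schools (step 1), and schools are drawn within strata (step 2).
# All records below are synthetic placeholders, not real frame data.
import random

random.seed(1998)  # make the illustrative draw reproducible

# Step 1 (assumed complete): Title I school lists obtained from the sampled
# districts, each record carrying the four stratifiers named in the RFP.
schools = [
    {"id": f"S{i:04d}",
     "program": random.choice(["schoolwide", "targeted assistance"]),
     "improvement": random.random() < 0.15,  # identified for improvement
     "poverty": random.choice(["high", "medium", "low"]),
     "urbanicity": random.choice(["urban", "suburban", "rural"])}
    for i in range(5000)
]

# Step 2: group schools into strata, then draw with proportional allocation.
def stratum(s):
    return (s["program"], s["improvement"], s["poverty"], s["urbanicity"])

strata = {}
for s in schools:
    strata.setdefault(stratum(s), []).append(s)

SAMPLE_SIZE = 1000
sample = []
for members in strata.values():
    # Small strata can round to zero; a real design would set a minimum
    # allocation and carry sampling weights for estimation.
    n = round(SAMPLE_SIZE * len(members) / len(schools))
    sample.extend(random.sample(members, min(n, len(members))))

print(f"{len(sample)} schools drawn across {len(strata)} strata")
```

The separate migrant, LEP, Native American, and school-improvement samples would be drawn the same way from their own frames, since those schools are too sparse in the representative frame.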
Task 7: Collect Data

Subtask 7.1: Conduct Site Visits

The contractor shall contact the selected schools and make arrangements for visits to take place during the 7th and 8th months after the effective date of the award. Each visit shall be conducted by a 2-member team and shall last approximately 4 days, including travel. The visits shall include in-person interviews of school staff and district administrators, document collection, classroom observation, and attendance at any school-sponsored special events. The site visits may also include focus group meetings with teachers, non-teaching school staff, and parents. A second round of follow-up site visits for the case studies shall take place during the 13th and 14th months after the effective date of the award.

Subtask 7.2: Administer Survey and Conduct Interviews

The contractor shall administer the fall 1998 CATI survey no later than 13 months after the effective date of the award to Title I principals in schoolwide programs and targeted assistance programs; Title I teachers and teacher aides in targeted assistance programs; classroom teachers in targeted assistance schools; and teachers in schoolwide programs. As part of the survey, the contractor shall ask respondents to provide copies of school documents, as mentioned earlier. These will include schoolwide and targeted assistance planning documents; parent-school compacts; school performance profiles and other relevant assessment information such as report card formats, testing results, and other outcome data; and school improvement plans. The contractor shall administer the second set of surveys through the CATI no later than 25 months after the effective date of the award, and shall collect updates of any school documents that have changed. The purpose of the second round of data collection is to track progress on implementing standards-based reforms to improve student learning and on using the Title I program to support these changes.

Task 8: Process and Analyze Data

The contractor shall develop coding materials for entering the collected data and preparing it for analysis as it is received. The contractor shall develop a system to efficiently and accurately obtain the needed data from the files and then put the data in a form that can be accessed by computer. The contractor shall place the abstracted data in a computer-accessible format. To ensure accuracy, the contractor shall verify all key data entered, conduct edit and consistency checks, and compute response rates (a minimal sketch of such checks appears after Subtask 9.1 below). The contractor shall resolve problems identified in this process through phone calls to the respondents. The contractor shall include information on the status of this task in each monthly progress report.

Subtask 8.1: Analyze Data

The contractor shall analyze the data in the manner described in the approved Data Analysis Plan. The contractor shall prepare preliminary write-ups of each spring 1998 case study, plus a proposed outline for a summary report on the case studies, no later than 11 months after the effective date of the award. After a 2-week ED review, the contractor shall prepare revised write-ups and a revised summary outline no later than 12 months after the effective date of the award. For the spring 1999 case studies, the contractor shall prepare preliminary write-ups and a summary report outline no later than 23 months after the effective date of the contract.
After a 2-week ED review, the contractor shall prepare revised write-ups and a revised summary outline no later than 24 months after the effective date of the award. For the fall 1998 CATI data collection, the contractor shall prepare preliminary tabulations no later than 18 months after the effective date of the award. After a 2-week ED review, the contractor shall prepare revised tabulations no later than 19 months after the effective date of the award. For the fall 1999 CATI data collection, the contractor shall prepare preliminary tabulations no later than 30 months after the effective date of the award. After a 2-week ED review, the contractor shall prepare revised tabulations no later than 31 months after the effective date of the award.

Subtask 8.2: Prepare Special Tabulations

The contractor shall provide, as requested, special tabulations specified by ED in addition to the tabulations required for the reports in Task 9. ED expects the production of no more than 100 special tabulations. ED will use these tabulations in policy-relevant documents and in the National Assessment of Title I reports.

Task 9: Prepare Reports

Subtask 9.1: Prepare Interim Study Reports

The contractor shall provide 3 interim reports, one after each of the first 3 data collections. The contractor shall prepare the first draft of a summary case study report, with the individual write-ups as possible appendices, no later than 13 months after the effective date of the award. After a 4-week ED review, the contractor shall prepare a revised draft of the case study summary report no later than 15 months after the effective date of the award. After a 2-week ED review, the contractor shall submit 10 copies of the final version no later than 17 months after the effective date of the award. The contractor shall prepare the reports in a format suitable for dissemination to Congress, participating schools, LEAs and SEAs, and others in the educational community. The contractor shall write each report for a non-technical audience, include an executive summary of the key findings, and include illustrative charts and tables.

The contractor shall prepare the first draft of the second summary case study report, with the individual write-ups as possible appendices, no later than 25 months after the effective date of the award. After a 4-week ED review, the contractor shall prepare a revised draft of the case study summary report no later than 27 months after the effective date of the award. After a 2-week ED review, the contractor shall prepare a final version of the case study summary report no later than 28 months after the effective date of the award.

The contractor shall submit an outline of the fall 1998 survey data collection report to ED no later than 19 months after the effective date of the award, and, after a two-week review by ED, shall submit a revised outline no later than 20 months after the effective date of the award. The contractor shall submit the first draft of the fall 1998 survey report, based on the revised outline, no later than 21 months after the effective date of the award. After a 4-week review by ED, the contractor shall submit a second draft no later than 23 months after the effective date of the award. After a 2-week review by ED, the contractor shall prepare a final report and submit 10 copies to ED no later than 24 months after the effective date of the award.
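The edit and consistency checks required under Task 8 above lend themselves to simple rule-based validation. The following is a minimal sketch under stated assumptions: the record layout and the specific rules are illustrative, not the study's actual instrument; only the 50-percent schoolwide eligibility threshold comes from the program provisions described earlier.

```python
# Minimal sketch of the Task 8 edit and consistency checks on keyed survey
# data. Records and field names are hypothetical placeholders.
records = [
    {"school_id": "S0001", "program": "schoolwide", "pct_poverty": 62,
     "title1_teachers": 4, "total_teachers": 28},
    {"school_id": "S0002", "program": "targeted assistance", "pct_poverty": 140,
     "title1_teachers": 5, "total_teachers": 3},  # two deliberate errors
]

def check(rec):
    problems = []
    # Edit check: a value must fall within its legal range.
    if not 0 <= rec["pct_poverty"] <= 100:
        problems.append("pct_poverty out of range")
    # Consistency check: a part cannot exceed its whole.
    if rec["title1_teachers"] > rec["total_teachers"]:
        problems.append("more Title I teachers than total teachers")
    # Program rule: schoolwide eligibility requires at least 50 percent poverty.
    if rec["program"] == "schoolwide" and rec["pct_poverty"] < 50:
        problems.append("schoolwide program below 50 percent poverty")
    return problems

# Flag failing records for follow-up calls to respondents, as Task 8 directs.
for rec in records:
    for problem in check(rec):
        print(rec["school_id"], "->", problem)
```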
Subtask 9.2: Prepare Final Report and Executive Summary

The contractor shall prepare a final report, not to exceed 100 pages, and an executive summary, not to exceed 20 pages, summarizing the findings of the national study. The final report will include findings from all of the data collections, including the fall 1999 CATI survey, which will not have its own report. The contractor shall include in the final report descriptive and analytic information that answers the research questions associated with the evaluation objectives. The contractor shall write the final report and the executive summary in a manner suitable for distribution to a broad audience of policy makers, educators, administrators of educational programs, and parents.

The contractor shall submit a descriptive outline for the final report no later than 30 months after the effective date of the award. After a two-week review by ED, the contractor shall submit a revised outline no later than 31 months after the effective date of the award. The contractor shall submit a first draft of the final report no later than 32 months after the effective date of the award. After a 4-week ED review, the contractor shall submit a second draft no later than 34 months after the effective date of the award. After a 2-week ED review, the contractor shall submit a third draft no later than 35 months after the effective date of the award. After a final 2-week ED review, the contractor shall submit 10 copies of the final report no later than 36 months after the effective date of the award.

Task 10: Feedback to Participants and Policy Audiences

Subtask 10.1: Provide Briefings

The contractor shall provide 2 briefings per year to a variety of groups, including the Independent Review Panel for the National Assessment of Title I, Department staff, Congressional staff, and other education organizations and associations, in order to inform policy makers and administrators at different levels of government, as well as the public, about the progress of the study. The contractor shall develop briefing material that is non-technical and appropriate for the general public. These briefings will be held in the Washington, DC area and scheduled by the Department.

Subtask 10.2: Disseminate Reports and Executive Summary

The contractor shall disseminate the documents described in Task 9, as they are released by ED, to all SEA, LEA, and school personnel, and to parents of students, who participate in the study.

Subtask 10.3: Make Presentations at Professional and Practitioner Conferences

The contractor shall submit proposals for no more than 4 staff to conduct presentations at approximately 4 professional and/or practitioner conferences during the second and third years of the contract. The contractor shall obtain the information on proposal requirements and deadlines from each professional and/or practitioner organization. The contractor shall submit to ED a list of conferences it would like to attend and a draft of the proposal for each conference, and shall receive ED approval before submission. For each presentation, the contractor shall submit the presentation material to ED for approval. The contractor shall not present study findings from reports or tabulations that have not been reviewed by ED and transmitted to Congress; for such studies, the contractor shall present methodology only.
Task 11: Archive Data

Subtask 11.1: Prepare Public Use Data Tapes

The contractor shall prepare, annually, data tapes that can be formatted to the NCES Electronic Codebook (ECB). The contractor shall discuss with NCES, at the beginning of the contract and prior to developing codebooks, the most efficient way of recording information so that additional costs do not need to be incurred in order to fit the specifications of the ECB (see Subtask 3.2). The contractor shall schedule the meeting with NCES no later than 7 weeks after the effective date of the award.

Subtask 11.2: Transmit the Data Tapes to ED

Upon completion of the study and ED's transmission of the final report to Congress, the contractor shall provide hard copy and electronic medium copies of the data set, code books, technical reports, and other study materials to an archival site, to be approved by ED, for public dissemination. The contractor shall ensure that the archived materials are in compliance with the Privacy Act. The contractor shall complete this task no later than the end of the contract.

Task 12: Follow Standards for Education Data Collection and Reporting

The contractor shall conduct all data collection and reporting in accord with the Standards for Education Data Collection and Reporting developed for the National Center for Education Statistics, U.S. Department of Education, unless otherwise approved by ED.

Task 13: Establish Contractor Performance Measurement System

The contractor shall establish an internal Performance Measurement System (PMS) with the capacity to:

- identify problem areas by order of importance;
- identify anticipated schedule slippage and cost overruns; and
- provide a means of determining where project managers and resources should be deployed to assist more critical tasks.

This information shall be included in the monthly progress reports. The progress reports shall include both yearly and cumulative contract costs, by task and for the full study. The contractor shall provide an operating PMS within 1 week of the effective date of the award.

Timelines and Activities/Deliverables

The contractor shall meet the following schedule (due dates are calculated from the effective date of the award). Except when specified, all deliverables should be sent through e-mail, with one hard copy submitted to the COTR and one hard copy to the Contracting Officer. All deliverables, to the greatest extent practical and where applicable, shall be accompanied by a 3½-inch (or metric equivalent) diskette compatible with WordPerfect for Windows version 6.1 or above, SPSS/PC, and/or Lotus 1-2-3, at a version/release to be specified by the Department at contract award, which shall include the complete text of the document and data files. In addition to the hard copies indicated below, all documents are also to be made available to ED by the due date over the electronic network. All deliverables shall include, as one of the required number of copies, a camera-ready copy with a cover page that identifies it as such. The contractor shall also include, as one of the required number of copies, a copy for the Contracts Office.
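Because every due date in the schedule below is expressed as an offset from the award's effective date, the calendar can be generated mechanically once that date is known. The following is a minimal sketch; the effective date shown is a placeholder, not a date from this solicitation, and only a few representative deliverables from the schedule are listed.

```python
# Minimal due-date helper: turn the schedule's week/month offsets into
# calendar dates. The effective date below is a placeholder.
from datetime import date, timedelta

EFFECTIVE = date(1997, 10, 1)  # placeholder effective date of the award

def add_weeks(start, weeks):
    return start + timedelta(weeks=weeks)

def add_months(start, months):
    # Safe here because the placeholder start date is the 1st of a month,
    # so there is no month-end overflow to handle.
    years, month_index = divmod(start.month - 1 + months, 12)
    return start.replace(year=start.year + years, month=month_index + 1)

# Representative offsets taken from the Schedule of Deliverables below.
deliverables = [
    ("Revised meeting minutes",           add_weeks(EFFECTIVE, 4)),
    ("Draft data collection instruments", add_weeks(EFFECTIVE, 8)),
    ("OMB package",                       add_weeks(EFFECTIVE, 12)),
    ("Final version of final report",     add_months(EFFECTIVE, 36)),
]

for name, due in deliverables:
    print(f"{due.isoformat()}  {name}")
```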
Schedule of Deliverables

Task   Deliverable                                                    Due Date                              Copies
1.1    Draft meeting minutes                                          2 weeks                               5
       Revised meeting minutes                                        4 weeks                               5
1.2    Draft quarterly progress meeting agenda                        2 weeks before each meeting           3
       Revised quarterly progress meeting agenda                      1 week before each meeting            5
       Draft quarterly progress meeting summary                       1 week after meeting                  5
       Revised quarterly progress meeting summary                     2 weeks after meeting                 5
2.1    List of proposed workgroup members                             1 week                                5
2.2    Draft agenda for workgroup meetings                            3 weeks before each meeting           5
       Revised agenda                                                 1 week before each meeting            10
       Draft briefing materials                                       3 weeks before each meeting           5
       Revised briefing materials                                     1 week before each meeting            10
       Workgroup meeting draft summary                                1 week after each meeting             5
       Workgroup meeting revised summary                              2 weeks after each meeting            10
3      Baseline management plan                                       4 weeks                               3
4      Draft data collection plan                                     4 weeks                               5
       Revised data collection plan                                   8 weeks                               10
5.1    Draft data collection instruments                              8 weeks                               5
       Revised data collection instruments                            12 weeks                              5
       OMB package                                                    12 weeks                              3
5.2    Revised data collection instruments                            4 months after each data collection   5
       Revised OMB package                                            5 months after each data collection   3
6.1    Draft list of case study schools                               10 weeks                              5
       Revised list of case study schools                             14 weeks                              5
6.2    Draft notification packet                                      12 weeks                              3
       Revised notification packet                                    14 weeks                              5
       Draft study brochure                                           4 weeks                               5
       Revised study brochure                                         14 weeks                              50
6.3    Draft letter to EIAC                                           12 weeks                              3
       Revised letter to EIAC                                         13 weeks                              3
6.4    Draft letter to CSSOs                                          12 weeks                              3
       Revised letter to CSSOs                                        15 weeks                              3
6.5    Draft letter to superintendents and schools                    12 weeks                              3
       Revised letter to superintendents and schools                  15 weeks                              3
8.1    Preliminary spring 1998 case study write-ups, report outline   11 months                             5
       Revised write-ups, outline                                     12 months                             5
       Preliminary spring 1999 case study write-ups, report outline   23 months                             5
       Revised write-ups, outline                                     24 months                             5
       Preliminary tabulations for Fall 1998                          18 months                             5
       Revised tabulations for Fall 1998                              19 months                             5
       Preliminary tabulations for Fall 1999                          30 months                             5
       Revised tabulations for Fall 1999                              31 months                             5
9.1    First draft of spring 1998 case study summary report           13 months                             10
       Second draft of spring 1998 case study summary report          15 months                             10
       Final version of spring 1998 case study summary report         16 months                             10
       Draft outline for fall 1998 survey report                      19 months                             5
       Revised outline for fall 1998 survey report                    19½ months                            5
       First draft of fall 1998 survey report                         20½ months                            10
       Second draft of fall 1998 survey report                        22 months                             10
       Final version of fall 1998 survey report                       24 months                             10
9.2    Draft outline for final report                                 30 months                             5
       Revised outline for final report                               31 months                             5
       First draft of final report                                    32 months                             10
       Second draft of final report                                   34 months                             10
       Third draft of final report                                    35 months                             10
       Final version of final report                                  36 months                             10
11.2   Data tapes                                                     End of contract                       1

General Instructions for Technical Proposal

The offeror is expected to show a thorough understanding of the legislation and the issues involved in this evaluation. While this is a data collection and analysis contract and the intent of each task is spelled out in the work statement, it is up to the offeror to propose the most effective method for carrying out these tasks. The offeror is expected to address all the tasks specified in the Work Statement. Assured results in terms of obtaining high-quality data, feasibility, and cost-effectiveness are all of the utmost importance.
The specifications contained in the RFP are a starting point: the proposal should build on them, not simply repeat language from the RFP. The proposed plan must be written in enough detail that a review panel can adequately judge its full merits. The panel will not make assumptions or guesses; hence, proposals should not be vague. Technical proposals must be limited to a maximum of 100 double-spaced pages, in addition to resumes and tables. Offerors should not place substantive materials in the appendix in order to circumvent the 100-page limit. Offerors are requested to follow the proposal format and content suggestions detailed below in preparing their technical proposals.

- Abstract
- Table of Contents
- Introduction. The Introduction should briefly describe the offeror's overall plan to achieve the study's objectives, the scope of work, the intended products, and the applicability of these products.
- General Approach. The General Approach section should describe the offeror's overall plan to achieve the study's objectives, including the data collection plan, the data analysis plan, anticipated problems and the means for overcoming them, and the ways in which the offeror's proposed approach is unique. The offeror shall also describe in this section how the design, sample, implementation, and data analysis of the study will complement and be coordinated with ongoing research conducted for the National Assessment of Title I and for other studies examining the impact of systemic reform.
- Baseline Management Plan. The plan should provide a listing and description of each task. The tasks should be listed in order of substantive relationship or in order of chronological completion dates. Indicate the names of key personnel for each task, as well as the person-days to be allocated for each person to each task. As appropriate, indicate significant non-personnel resources to be applied to each task. Identify potential and/or anticipated problems, and suggest proposed variations in the design of the study that may facilitate successful completion of the study's tasks and objectives. Describe any subcontractual arrangements, including the work to be done, responsibilities for tasks, reporting arrangements, and any other terms of the agreement. A letter from the proposed subcontractor delineating the nature of the agreement shall be included. Provide a description of the proposed ADP security and standards program, in compliance with the provisions of the ADP Security Manual (ACS HB-No. 6) and the ED ADP standards.
- Related Corporate Experience. This section should briefly describe the experience of the offeror and any proposed subcontractors in conducting studies of a similar or related nature. Short abstracts of related work should include the name, current affiliation, and current telephone number of each study's project officer. Corporate facilities, including hardware and software, useful to completion of this study should also be described.
- Proposed Staff. An organizational chart should be included to show lines of authority and responsibility in the conduct of the project. This section should indicate clearly the relationship between past staff experience and proposed task assignments for this study.
To conduct and complete this contract successfully, the contractor must provide staff who together have expertise in the following areas: research in compensatory education, in particular Chapter 1/Title I; conducting research in public schools; analyzing longitudinal data; analyzing observational data; combining qualitative and quantitative data; and developing management plans for, and successfully conducting, large-scale research activities in multi-site studies. To complete this contract successfully, offerors will need to propose a Project Director and supporting staff who have considerable expertise and success in conducting Chapter 1/Title I policy-relevant studies. ED recommends that the Project Director spend a minimum of 50 percent of his or her time on this study. The Project Director shall be responsible for keeping ED informed of all major decisions or events likely to affect project performance or products. The supporting staff should include senior- and junior-level staff. A detailed resume should be included for all professional staff proposed for the study. A letter of commitment should be included from all proposed consultants and professional staff not currently employed by the offeror. The need for any proposed consultants shall also be fully justified, including how they would contribute to the project. Consultant resumes shall be provided. The offeror shall outline all contractual obligations existing for each major staff member proposed during the anticipated course of the award. The funding sources for other projects and the percentage of time allotted shall also be provided.

- Authors of Proposal. The senior author and coauthors of each section of the Technical Proposal shall be identified by name, and their proposed roles in the project identified.
- Protection of Human Subjects. Offerors must be cognizant of the requirements of the Department of Education's regulation on protection of human subjects of research. This regulation was published in the Federal Register on June 18, 1991, as "The Federal Policy for the Protection of Human Subjects: Notices and Rules"; it appears under Title 34, Code of Federal Regulations, Parts 97, 350, and 356. Offerors must also be cognizant of the requirements of the "Additional Protections for Children" under Title 45, Code of Federal Regulations, Part 46, Subpart D. Offerors shall include in their proposal a statement that they are cognizant of these requirements and shall comply with them as necessary.

Instructions to Offerors Keyed to Tasks

Task 1 Communications with ED
For purposes of bidding, for all subtasks under Task 1 requiring meetings at ED, the offeror shall include no more than 4 contractor staff unless otherwise noted. The individuals selected to attend should be key task and/or management leaders.

Task 2 Utilize Outside Expertise on Ongoing Basis
The offeror shall include in the proposal a proposed list of 8-12 workgroup members. The list shall include names, affiliations, areas of expertise, and an explanation of each member's potential contribution to the study. The offeror shall include in the list technical experts and practitioners who have analytic experience, programmatic experience, or both. The offeror shall draw at least half of the proposed technical experts from the list of IRP members in Appendix I.

Task 3 Refine Baseline Management Plan
The offeror shall include in the proposal a work and management plan. The plan shall include the following:
- Critical path diagrams showing overall development of the evaluation. These diagrams shall depict a timetable and staff allocation for each product of this contract. The diagrams shall also include a breakdown of the subtasks necessary for timely completion of the products/deliverables.
- GANTT or PERT charts and a narrative detailing the allocation of professional offeror staff, including person-loading charts showing staff allocation by task, the sequence and flow of work, and the contingency measures to be implemented in the event that processing backlogs occur; the person-loading charts shall indicate not only staff allocations by task but also the percentages of staff time allocated to each task. The offeror shall include in the charts all key staff members, such as analysts and programmers, whose functions are essential to the timely development and performance of project tasks.
- Descriptions of functional responsibilities, lines of communication, and lines of authority for senior project staff and project management.
- The offeror's outline of planning, scheduling, monitoring, controlling, and reporting processes.
- The master schedule for deliverables.
- The offeror's methods for managing the fluctuating daily transaction volumes associated with each processing cycle.
- The offeror's plan for corporate review and oversight of the project, including administration and fiscal management and control.
- Established standards and measures to track performance and to ensure quality.
- Procedures to assure that essential tasks will be accomplished and activities will continue uninterrupted even if there is a protracted absence of key personnel, as well as procedures for alerting ED in the event of problems of varying degrees of importance.

Task 4 Refine Data Collection Plan
If the offeror can suggest improvements and/or more cost-effective strategies for collecting the information needed, these should be clearly spelled out in the proposal. The offeror shall include in the proposal a data collection plan that identifies all data elements to be collected. The offeror shall also identify in the proposal how the data to be collected in this study will be linked to other current or planned data collections. The data collection plan shall also include: a time frame identifying which data elements are to be collected within each period; a detailed description of the process for coordinating multiple data collection sites, with mechanisms for ensuring quality control and timeliness; the procedures that will be used to ensure respondent confidentiality; and all procedures the offeror intends to employ to reduce burden and to obtain a high response rate.

Task 6 Select Sample
The offeror shall propose case study topics and suggest sources for nominations of schools for case study sites.

Task 8 Process and Analyze Data
The offeror shall include in the proposal the procedures it intends to use to process the data in a timely manner, provide checks for accuracy, and improve, if needed, the response rate. The offeror shall also include in the proposal evidence of having successfully performed this type of task before. The offeror shall include a timeline covering receipt of data, processing of data (including cleaning data tapes and verification), and delivery of cleaned data ready for analysis.
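As an illustrative sketch only of the Task 8 processing cycle (receipt, cleaning, verification, delivery of analyzable data), the following separates valid survey records from records flagged for follow-up. The file layout, field names, and validation rules are invented for the example; the RFP does not specify them.

```python
import csv

# Hypothetical validation rules for one survey item file.
REQUIRED = ["school_id", "item_01"]
RANGES = {"item_01": (1, 5)}  # e.g., a 5-point response scale

def validate(row: dict) -> str:
    """Return a description of the first problem found, or "" if clean."""
    for field in REQUIRED:
        if not (row.get(field) or "").strip():
            return f"missing {field}"
    for field, (lo, hi) in RANGES.items():
        try:
            value = int(row.get(field) or "")
        except ValueError:
            return f"non-numeric {field}"
        if not lo <= value <= hi:
            return f"{field} out of range"
    return ""

def clean(in_path: str, out_path: str, err_path: str) -> None:
    """Split raw records into cleaned data and an error log for follow-up."""
    with open(in_path, newline="") as raw, \
         open(out_path, "w", newline="") as good, \
         open(err_path, "w", newline="") as bad:
        reader = csv.DictReader(raw)
        ok = csv.DictWriter(good, fieldnames=reader.fieldnames)
        err = csv.DictWriter(bad, fieldnames=list(reader.fieldnames) + ["problem"])
        ok.writeheader()
        err.writeheader()
        for row in reader:
            problem = validate(row)
            if problem:
                err.writerow({**row, "problem": problem})
            else:
                ok.writerow(row)
```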
Task 9 Reports
An important factor affecting costs and timeliness is the extent to which the contractor provides high-quality reports that need little or no revision. The offeror shall include a description of how it intends to provide quality control, along with evidence of having done this successfully in the past.

Summary of the Title I/ESEA Independent Review Panel Meeting, December 16-17, 1996

Executive Summary
The seventh meeting of the Independent Review Panel (IRP) was held on December 16-17, 1996. The meeting agenda included: (1) a discussion of the subgroups' role; (2) an overview of Planning and Evaluation Service (PES) evaluations; (3) preliminary findings of the Baseline District Survey conducted for the Federal Implementation Study, followed by a discussion on evaluating the impact of federal support in the context of state and local reform; (4) subgroup meetings; (5) a discussion of the next interim report of the National Assessment of Title I; and (6) updates on the evaluations of the Drug-Free Schools and Community Act, School Violence Prevention, and School-to-Work programs.

Meeting Overview and Discussion of the Subgroups' Role
The panel's initial discussion focused on the role of the three subgroups: implementation, outcomes, and cross-cutting issues. PES staff explained that the subgroups could assist PES as it develops and conducts several large-scale evaluations. The panel also discussed the need to evaluate Goals 2000 and asked what the Department planned to do to examine how the program's funds were being spent. The Goals 2000 program office would have some of that information collected in January; data are also being collected on Goals 2000 in all of the Department's cross-cutting studies. In addition, the panel discussed funding of Title I evaluations and other program evaluations. The panel's letter/report to Congress last year had focused on funding issues and likely had some impact on the decision to appropriate $7 million more than expected for Title I evaluation. It was suggested that a subgroup of the panel and ED staff could meet with Congressional staff to discuss the need for maintaining a stable source of funding for program evaluation. The panel also discussed the possibility of foundations as a resource for evaluation of systemic reform. Panel members offered several suggestions concerning the best ways to collaborate with foundations: (1) meet with foundations involved in evaluation efforts to discuss how those evaluations can be linked with ED efforts; (2) develop concrete study proposals to submit to the foundations in question; (3) request that foundations evaluate the use of Goals 2000 and its relation to Title I and larger systemic reform efforts; and (4) let subgroups discuss possible linkages between ED's evaluations and those conducted by foundations.

Planning and Evaluation Service Overview
An overview of PES's current work was presented. Evaluations are addressing several questions about the use of student outcome data to guide improvement, such as the extent to which districts and schools have and use such data. Another important topic is the use of research-based strategies for improvement, given the widespread use of program models that are not supported by evaluative evidence. A challenge for PES in designing all its evaluations is balancing qualitative and quantitative methods.
Because the office is called upon to provide nationally representative data broken down in a number of ways, large samples of districts and schools are necessary; however, studies done on this scale do not provide the needed depth of understanding. As one example of a more qualitative method, PES intends to collect local plans over a period of time and to investigate whether the plans change. Panel members said policy makers should not assume states are moving forward with federally supported reform efforts. In fact, many state plans report that reform efforts depend in large part on the state legislature's willingness to appropriate funds: "Title I is no better than the [overall] system." Therefore, the evaluations of the reauthorized Title I should be conducted in states that are supportive of systemic reform.

Federal Implementation Study
Staff of the Urban Institute presented preliminary results from the district survey conducted for the Federal Implementation Study, which asked districts about their understanding of standards-based reform and the usefulness of the sources of assistance they receive. The survey addressed:

Understanding of Standards-Based Reform. The survey found that 90 percent of district respondents said they understood standards-based reform, but 25 percent said such reform would take little or no change to implement.

Understanding of New Federal Provisions. Fewer respondents indicated a full understanding of the new federal provisions. District responses in vanguard states indicated a higher level of understanding than those in non-vanguard states.

Differences Related to District Size. The survey results indicate that larger districts (25,000 or more students) have a greater understanding of federal provisions and standards-based reform than smaller districts (those with 300-2,499 students).

Sources of Information. When asked to rate the helpfulness of the sources of information they receive, respondents indicated that periodicals and publications were the most helpful. Respondents found information from ED less helpful; this is consistent with ED's strategy of disseminating information through intermediaries.

Evaluating the Impact of Federal Support in the Context of State and Local Reform
Findings were shared from a long-term study, conducted by the Consortium for Policy Research in Education (CPRE) in 12 states and 25 districts, on the status of standards-based reform. The presentation included information on the good news about standards-based reform, as well as the modifications, changes, variations, and barriers standards-based reform has experienced in various locations.

Summary of Subgroup Meetings (detailed subgroup summaries are attached)

Implementation Subgroup
The subgroup wanted to begin with a survey addressing implementation of the provisions of the law; this survey could use an embedded sample, picking states, then districts in those states, and schools in those districts. Next, there should be in-depth case studies. These could inform the design of a subsequent survey to gather national-level data on the patterns found in the case study sites. The group also discussed the areas that need to be addressed, noting that resource allocation patterns are one important area. The members discussed what implementation means and how to provide the context that shows Congress the wide variation in how ESEA drives change. The group agreed that the study should not assume particular models of change, but should use field-based research to find out what has changed.
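The embedded sample the subgroup describes (states, then districts within those states, then schools within those districts) is a multistage selection. As a minimal sketch only, with an invented sampling frame and simple random draws standing in for the probability-proportional-to-size selection a real survey would use:

```python
import random

def embedded_sample(frame, n_states, n_districts, n_schools, seed=1997):
    """Draw states, then districts within states, then schools within districts.

    frame is a hypothetical nested mapping {state: {district: [schools]}};
    in practice the frame would come from NCES universe files.
    """
    rng = random.Random(seed)  # fixed seed so the draw is reproducible
    sample = {}
    for state in rng.sample(sorted(frame), min(n_states, len(frame))):
        districts = frame[state]
        sample[state] = {}
        for district in rng.sample(sorted(districts), min(n_districts, len(districts))):
            schools = districts[district]
            sample[state][district] = rng.sample(schools, min(n_schools, len(schools)))
    return sample
```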
Outcomes Subgroup
The subgroup decided that its name should be changed to the Program Results Subgroup. Members reached agreement on two general statements: (1) as we do analysis, we need to speak to a broader audience: policy makers and the public; and (2) we need to ensure that outcome studies are of high quality and address important issues; we want to continue to look at the design of such studies and synthesize the results.

Cross-cutting Issues Subgroup
The subgroup agreed that a detailed description is needed of how federal funds (from Goals 2000, Title I, and other programs) are spent by states and districts, along with an explanation of how the programs relate to one another and contribute to systemic reform.

Discussion of the Interim Report of the National Assessment of Title I
After reviewing the outline of the Interim Report, panel members questioned whether certain information would be included. Several members suggested that the report provide: a summary of the provisions of ESEA, including an explanation of how the law changed under reauthorization; a graphic that illustrates all ED-sponsored studies (existing or planned) and how they relate to one another; descriptions of successes in implementation, with whatever preliminary data exist; descriptions of what data or research will be in the final report that are not available now; and an illustration that, adjusted for inflation, the appropriations for the evaluation are not that significant. This last step is necessary because some members of Congress will be reluctant to provide additional funding in light of Title I's increased appropriations for the last fiscal year.

Evaluations of the Drug-Free Schools and Community Act, School Violence Prevention, and School-to-Work Programs
Several presentations were made to the panel by researchers conducting ongoing evaluations of the Drug-Free Schools and Community Act, School Violence Prevention, and School-to-Work programs.

Evaluation of the Drug-Free Schools and Community Act Program
Preliminary findings were presented from an ED-sponsored study, conducted by RTI, of the effectiveness of the antecedent Drug-Free Schools and Community Act (DFSCA) program. The study, which began in 1991, involved 19 districts that receive DFSCA funds. The districts were matched: each district with a comprehensive DFSCA program was matched with a district without a comprehensive program. Comprehensive programs were defined as those that had multiple services (e.g., counseling, parent involvement, community involvement) and spanned grades K-12. Non-comprehensive programs were those that offered fewer of these services or that focused on only a few grades. The study followed a cohort of children from grades 5 and 6 to grades 8 and 9. Study findings include: (1) drug use increased in all four years; (2) children's tolerance for drug use increased over the duration of the study; and (3) children perceived other students as using more drugs than they actually were. These findings parallel national trends for this time period.
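The matched-district design just described lends itself to a simple within-pair comparison, which is the analytic point of the matching: differencing outcomes within a pair removes much of the between-district variation. The sketch below is illustrative only; the outcome values are invented, not findings of the RTI study.

```python
# Each pair holds a hypothetical outcome (e.g., a drug-use rate) for a district
# with a comprehensive DFSCA program and its matched non-comprehensive district.
pairs = [
    {"comprehensive": 0.21, "matched": 0.24},
    {"comprehensive": 0.18, "matched": 0.17},
    {"comprehensive": 0.25, "matched": 0.29},
]

# Within-pair differences, then their mean: negative values would indicate
# lower drug use in the comprehensive-program districts.
diffs = [p["comprehensive"] - p["matched"] for p in pairs]
mean_diff = sum(diffs) / len(diffs)
print(f"mean within-pair difference: {mean_diff:+.3f}")
```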
Study on School Violence and Prevention
Westat staff presented to the panel an outline of a study of school violence prevention efforts supported under the Safe and Drug-Free Schools and Communities Act (SDFSCA) program. According to the authors, the purpose of the study is to (1) contribute to the assessment of the SDFSCA program; (2) validate SDFSCA performance indicators; and (3) provide an update on school violence and school violence prevention efforts. Researchers will use two studies to conduct the examination.

Evaluation of School-to-Work Implementation
Mathematica Policy Research, Inc., is conducting multiple studies to examine the implementation of school-to-work (STW) programs. These studies share several characteristics with the evaluation of Title I: both efforts involve (1) evaluating complex reform rather than narrow interventions; (2) examining a diversity of local approaches; (3) evaluating implementation quality; and (4) dealing with change caused by multiple factors. Different stakeholders desire different outcomes from school-to-work programs. Mathematica has planned a five-year, three-part evaluation of STW programs. The evaluation involves four objectives: document the progress of states and local partnerships in implementing comprehensive STW systems; identify promising practices and barriers to implementation progress; describe the participation of schools, employers, and other organizations in the creation and operation of STW systems and programs; and describe students' participation in STW activities and their transitions to postsecondary education, training, and employment, as well as changes in participation and outcome patterns as implementation progresses. The three studies are a Local Partnership Survey, In-Depth Case Studies, and a Study of Student Experience.

Future Meeting Dates
The panel agreed on the following dates for its 1997 meetings: March 3, August 7, and November 10-11 (the November meeting has since been changed to November 9-10).

Summary of the Title I/ESEA Independent Review Panel Meeting, August 8-9, 1996

Independent Review Panel (IRP) Members Present:
Marilyn Aklin, National Coalition of Title I Parents
Eva Baker, University of California at Los Angeles
Joyce Benjamin, Oregon Department of Education
George Corwell, New Jersey Catholic Conference
Sharon Darling, National Center for Family Literacy
Susan Fuhrman, University of Pennsylvania, Consortium for Policy Research in Education
Jack Jennings, Center on National Education Policy
Joseph Johnson, University of Texas at Austin
Jessie Montano, Minnesota Department of Education
Jennifer O'Day, Stanford University, School of Education
Mary O'Dwyer, Oyler School, Cincinnati, OH
Andrew Porter, University of Wisconsin at Madison
Linda Rodriguez, Pasco County (FL) School Board
Richard Ruiz, University of Arizona, College of Education
Ramsay Selden, Education Statistical Services Institute
Maris Vinovskis, University of Michigan
Elaine Walker (for Beverly Hall), Newark (NJ) Public Schools
U.S. Department of Education Staff Present:
Joanne Bogart, Planning and Evaluation Service (PES)
Melissa Chabran, PES
Joe Conaty, Office of Educational Research and Improvement (OERI)
Alan Ginsburg, Director, PES
Bill Kincaid, Office of the Under Secretary
Mary Jean LeTendre, Director, Compensatory Education Programs (CEP)
Rick Lopez, Office of Bilingual Education and Minority Language Affairs
Oliver Moles, OERI
Valena Plisko, PES
Julie Pederson, PES
Jeff Rodamar, PES
Elois Scott, PES
Stephanie Stullich, PES
Gerald Tirozzi, Assistant Secretary, Office of Elementary and Secondary Education (OESE)
Susan Wilhelm, CEP

Other Attendees:
Tony McCann, House Appropriations Committee
Kathy Stack, Office of Management and Budget

Presenters:
Pascal Forgione, Commissioner, National Center for Education Statistics (NCES)
Arnold Goldstein, NCES
Steve Gorman, NCES
Gary Phillips, NCES
Fumiyo Tao, Fu Associates
Robert St. Pierre, Abt Associates
Bayla White, OESE
Bill Strang, Westat

Meeting Overview
The sixth meeting of the Independent Review Panel (IRP) for the National Assessment of Title I and the effects of federal programs on education reform was held on August 8-9, 1996. The meeting focused on: (1) key implementation and targeted studies; (2) obtaining and reporting student outcomes; (3) the evaluation of other Title I program areas; and (4) subgroup activities and other issues.

Implementation and targeted studies
Staff from the Education Department briefed panel members on the status of four studies: (1) the study of barriers to parent involvement; (2) the evaluation of Title I services to private school students; (3) the Title I targeting study; and (4) the longitudinal evaluation of school change and performance. Questions and concerns raised by panel members will be incorporated in the study designs. In addition, the panel discussed current and planned state, local, and school implementation studies, and provided advice on key questions, embedded sampling, and strategies for linking responses. During the panel discussion, members identified additional areas for ED to consider, including the purpose of the studies, alternative methods of data collection, and cost.

Obtaining and reporting student outcomes
Staff from the National Center for Education Statistics (NCES), including Commissioner Pascal Forgione, briefed the panel on the use of NAEP data to evaluate Title I. The panel's reaction to the presentation centered on the lack of a poverty variable, parental background information, or identification of Title I students. While the panel did not reach conclusions about the use of NAEP data in Title I assessments, one panel member said NAEP could not help in the evaluation and that this is not surprising, since it was not designed to evaluate Title I. The panel chair invited Dr. Forgione to return to the next panel meeting to continue the discussion. The panel also deliberated over state and locally reported achievement results and the Title I federal/state collaborative. ED staff requested the panel's assistance in selecting data sources. Much of the discussion focused on how to interpret "adequate yearly progress" and what data from state assessments will help ED measure it. Panel members were concerned about the quality of most local and state assessments, and noted that state assessments are moving toward focusing on individual students, just as Title I is focusing on changes in the proportions of students who are meeting certain proficiency levels.
An intense conversation followed on attribution, with panel members asking, "Do we want to say whether Title I works?" Given the flexibility of Title I, another panel member suggested that ED condition the public and Congress to accept a more complex answer, one that involves multiple measures of achievement.

Evaluation of other Title I program areas
Staff from the Education Department, along with investigators, briefed panel members on the status of the evaluations of Title I-Part B, Even Start, and Title I-Part C, Migrant Education. After learning of the lack of data on migrant students, two panel members suggested that ED include an oversampling of migrant students in the longitudinal study in order to form a more complete picture of these students and how well they are served. The presentation on data problems in migrant education also led to a broader discussion of the need for better databases. A panel member suggested that, as a decentralized educational data system emerges with a statewide system as its highest level, school systems need ways to exchange data and use their respective data systems collaboratively.

Subgroup activities and other issues
The panel reviewed the matrices developed by PES staff on the various ED studies and their characteristics. Panel members agreed to send in information about their own studies that correspond in some way to those listed. ED staff received tacit approval on the issues the panel will examine at the next meeting: national education goals, safe and drug-free schools, violence, and waivers. One panel member suggested that PES staff could distribute briefing memos or papers to the panel on selected programs rather than spend time on presentations in the panel meetings. The panel discussed the types of subcommittees that should be formed and agreed that three subgroups were needed in the short term, to focus on: (1) assessment/outcomes; (2) implementation; and (3) the "big picture" (identifying what holes exist and providing a strategic view). A summary of the discussion is attached. The next meeting of the panel is planned for December 4-5, 1996.

Meeting Summary
The following is a summary of the discussions held at the sixth meeting of the Independent Review Panel (IRP) on August 8-9, 1996. The summary is organized around the key segments of the meeting agenda: (1) updates on key implementation and targeted studies, followed by a discussion of issues pertinent to these studies; (2) a discussion focused on obtaining and reporting student outcomes; (3) presentations and discussions of the evaluation of other Title I program areas; and (4) a discussion concerning subgroup activities and other issues.

Meeting Overview and General Legislative and Budget Update
Gerald Tirozzi, Assistant Secretary for Elementary and Secondary Education, briefed the panel on the legislative, budget, and programmatic issues in which the Department is currently engaged, including the National Assessment of Title I; the increase in the number of schoolwide programs; waivers; the granting of Ed Flex authority to states; the status of Goals 2000; and state consolidated plans.

Key Implementation and Targeted Studies: Presentation and Panel Discussion

Study of Barriers to Parent Involvement
Oliver Moles of OERI briefed the panel on the status of the study.
He summarized the four foci of the report: barriers to parent involvement; early implementation of Title I provisions; examples of comprehensive LEA and school programs; and a summary of state efforts to support and encourage parent involvement. The study is focused on both the elementary and secondary school levels. He explained that a consultant group, comprising research experts, practitioners, and parent representatives who are familiar with Title I, will review the study plans and progress. Additionally, he noted that the contractor is selecting sites through recommendations and nominations, looking specifically at parent involvement activities and student achievement data. These recommendations and nominations have come from consultants and the parent involvement literature, and the study will consider any suggestions from panel members. Profile information will be collected by telephone interviews. As the panel discussed the study, several members inquired whether the contractors would profile programs targeting specific groups of parents or types of schools. These queries and suggestions included the following:

Sharon Darling asked whether any sites focused on family literacy, specifically those including both school-age and preschool-age children. Dr. Moles responded that the study would include some programs for preschool children but would focus on those targeting kindergarten students and older.

Joyce Benjamin suggested that the study consider programs involving the parents of students with disabilities. She observed that they are "the most effective group of parents I have ever seen. They are very effective and very smart people." Dr. Moles responded that he would follow up on this and talk with the contractor about that possibility.

George Corwell inquired about both the involvement of parents from private schools in the consultant group and the inclusion of programs that serve private school students in the sample. Mary Jean LeTendre of CEP said the study could solicit recommendations of district programs that involve private school parents.

Joe Johnson noted his concern that the study address a wide array of parents with special needs, the ones who do not typically get involved: parents with limited literacy, parents who do not speak English, parents in migratory situations, and parents who are homeless. Dr. Moles agreed, responding that some such nominations are already in the study's sets and that he would welcome more.

Andy Porter noted that some questions asked by the study already have a good research base, and asked what value the current study would add to what is already known about parent involvement in schools. Dr. Moles responded that the study will offer insight in two areas: (1) it looks at early implementation of Title I provisions, such as the school-family compacts, from both the school and the parent perspective; and (2) it will provide a fresh perspective on barriers for both schools and families, focusing specifically on some important concerns (time, resources, and training), which may help to reframe the discussion and avoid finger-pointing. The study also has some new data on different dimensions of parent involvement that will be useful.

Maris Vinovskis suggested to the group that the study, and the larger work of the panel, may be missing a broader question: where is intervention most important? Low-income children are hurt by their lack of learning experiences over the summer, according to his reading of the research.
Yet he said that too much of what we do is school-oriented rather than child-centered. Dr. Vinovskis asked whether the study would contact libraries and museums to look at summer programs for parents as an important intervention point. Dr. Moles responded that the study had focused on school-year programs. Alan Ginsburg suggested that the larger issue of welfare reform raises several questions about educational opportunities for parents, how children spend their time, and how to open the job market. He added, "My sense is that we know what we want in this area, such as reading at home, but don't know how to produce it. We think the summer is important but we don't have evidence about interventions." Dr. Ginsburg concluded the discussion by suggesting that it would be useful to convene a group, summarize the data in this area, and plan a modest agenda to look more causally at interventions.

Evaluation of Title I Services to Private School Students
Val Plisko briefed the panel on the study and the changes made in response to the concerns expressed by the panel at its last meeting. In particular, ED has worked to balance the public and private school administrators' perspectives. The sample has been cut from 400 to 280 districts, and a survey of private school personnel has been developed. She added that ED is relying heavily on the private school associations to solicit cooperation with the survey, and thanked Mr. Corwell for his tremendous support. Dr. Plisko stated that the study will address the changes in the law and their effects on private school student participation. Because the allocation is now based on the number of poor children rather than the number of educationally disadvantaged children, the study will examine district reactions and the possible effects on participation rates. The study will also look at: (1) consultation between public and private school officials; (2) the relationship between public school programs and parents of private school students; (3) patterns in service delivery (i.e., does the program use computer-assisted instruction (CAI), face-to-face services, or a combination?); (4) the use of capital expense funds; (5) the strategies schools are using to identify students; and (6) the assessment of the students and programs. Policy Studies Associates (PSA) is preparing an issue paper for delivery at the end of August, and she invited comments from panel members on this paper. A survey is awaiting OMB approval and will be sent out in the fall, to be finished by April 1997. In response to a query from Marilyn Aklin, Dr. Plisko explained that the study will focus on students attending religious schools, especially those affiliated with the three largest sectors for Title I participation: Catholic, Lutheran, and Hebrew day schools. Mr. Corwell noted that it may be difficult to collect data on face-to-face instruction because the Felton decision does not allow for on-site instruction in religious schools. Ms. Aklin also inquired about the types of information the study would be seeking. Dr. Plisko responded that the study would examine several questions: How do private schools consult with public school representatives? How often? What do they consult about? To what effect? The study's intent, she said, is to compare how services are provided to public school students and to private school students.
Bruce Haslam of Policy Studies Associates (PSA) explained that PSA has identified 12 areas in which consultation occurs, and the survey will ask who is involved in each area, including parents. Eva Baker raised a caution about reporting data on service provision in CAI vs. non-CAI categories. Given the variability in CAI instruction, she warned that the study should carefully analyze what is provided, so that incorrect inferences will not be drawn. Dr. Baker noted that she has done studies of CAI and that looking at the time logged does not give the necessary information on instructional quality; Dr. Porter seconded this concern and noted that this has been a problem with most studies of CAI. Dr. Plisko noted that the PSA issue paper will address this concern, and that ED is interested in the quality of services provided through technology. Elaine Walker asked whether the study would rely solely on surveys. Dr. Plisko said yes, but the issue paper would frame the overall discussion.

Title I Targeting Study
Stephanie Stullich briefed the panel on the status of the study, noting that data are currently being collected. The study looks at Title I targeting at the school level in a sample of 40 districts of both large and medium size and of different poverty levels. The study analyzes data from the same districts for three consecutive school years. Some of the questions addressed by the study include: (1) Are funds concentrated in higher-poverty schools and in fewer schools since reauthorization? (2) Are middle and high schools served to a greater extent than before? (3) What are the effects of the minimum allocation rule? Has it affected the size of allocations? The study will include a national overview based on analysis of the 1993-94 Schools and Staffing Survey and the 1995-96 NAEP. Finally, the study will look at the impact of waivers on Title I targeting, given that most of the waivers are for Title I. Ms. Stullich stated that some preliminary data would be available this fall, national data in spring 1997, and the final report in September 1997. Susan Fuhrman asked several questions about the sample for the study, including: Would the sample be state-representative? Ms. Stullich said no, because few states have the necessary data. Would any state effects be apparent? Ms. Stullich noted that most requests for waivers are coming from Pennsylvania. Will the sample for this study be related to any of the other studies' samples? Ms. Stullich said not really; the study targeted the largest districts in the country, although some were not included because they did not have school-level allocation data. In response to panel members' queries, Ms. Stullich and Ms. LeTendre explained whether the study would address:

Schoolwide programs. The study will compare schoolwide programs with targeted programs.

Waivers. The study will look at waivers granted in two distinct time periods, 1995-96 vs. 1996-97. Additionally, another study that is looking at waivers may pick up some of the patterns and key issues.

Longitudinal Evaluation of School Change and Performance
Elois Scott briefed the panel on the status of the longitudinal evaluation, but cautioned that she could not discuss it in detail because the Request for Proposals (RFP) closed Monday (ED received 83 requests for the RFP) and ED is in the process of reviewing bids.
She noted that ED relied heavily on input from the panel for revisions to the RFP, and is requiring that 50 percent of the technical working group for the study be members of this panel. She noted there are still some "sticky issues" to be resolved in the study design, including:

Data collection and analysis. An OMB package will be ready in the next couple of weeks.

Sampling. The RFP calls for a sample of 80 schools. The sample has not yet been drawn but will be purposive rather than nationally representative. Sampling decisions have been made in consultation with an IRP subgroup, which helped determine appropriate states (KY, KS, FL, MD, OR, and PA). Also, some districts considered to be at the forefront will be included in the sample; PES wants to look both at districts that are highly active in supporting school reform and at those that are simply following the state lead.

Classroom observations. What type of information should be collected to show that reform is actually taking place? At what frequency should observations be done? The field visitors will collect artifacts such as school-family compacts. ED also wants to collect artifacts that provide information on students' work and classroom activities, but recognizes the difficulty of analyzing these artifacts and linking them to school reform.

Student performance and assessment. The panel's subgroup on assessment helped consider whether student work, standardized tests, NAEP scores, or state assessments should be used. However, there is some tension: the Title I legislation says states should decide what assessments to use, yet the study needs some common instrument for all students and schools. At this point, the study will include both a common instrument and some local assessment information.

Dr. Scott noted that a key question guiding the study is: What actually constitutes school reform? Therefore, it is important to decide what pieces of information should be collected within the entire school community to show whether reform is taking place. Several panel members voiced concerns about the study:

Ms. Aklin questioned whether the study would acknowledge that schools and states are at different stages in the reform process and that this affects student outcomes. Dr. Scott responded that ED is interested in looking at student achievement in places where Title I has been implemented as intended as well as in places that are not as far along in the process. The study will compare students' access to reformed curriculum and instruction across stages of reform and across poverty levels.

Richard Ruiz asked whether the study will look at delivery and instructional change, as opposed to just student achievement. Dr. Scott responded yes, noting that the reform is played out in the classroom. ED is trying to collect information that shows classroom effects, as well as how they change over time, she said, but the end goal is that students' outcomes will be affected. ED is stressing that the impact of the new Title I legislative changes cannot be assessed during this three-year period, because many will not yet be implemented. This study will show where reforms are moving.

Noting the importance of the study, Dr. Vinovskis suggested that if the study does not follow cohorts of students over time, then the study is useless. Dr. Scott explained that the RFP calls for assessment of the fourth-grade classes in each year, with the addition of third-grade classes in the first year and fifth-grade classes in the third year. This will allow the study to follow a cohort over a three-year period as well as to assess three successive fourth grades.
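Concretely, that grade coverage yields both one longitudinal cohort and three fourth-grade cross-sections. The sketch below merely enumerates the design as stated; the year labels are mine, not the RFP's.

```python
# Grades assessed in each study year, per the RFP design described above.
design = {
    "Year 1": [3, 4],
    "Year 2": [4],
    "Year 3": [4, 5],
}

# One cohort is followed longitudinally: grade 3 in Year 1, grade 4 in
# Year 2, grade 5 in Year 3.
cohort_path = list(zip(design, [3, 4, 5]))

# Grade 4 is also assessed every year, yielding three successive
# fourth-grade cross-sections for trend comparisons.
fourth_grade_years = [year for year, grades in design.items() if 4 in grades]

# Sanity check: every step of the cohort path is covered by the design.
assert all(grade in design[year] for year, grade in cohort_path)
```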
Dr. Vinovskis also suggested a review of the Prospects study and its implications concerning student mobility.

Dr. Johnson observed that an important focus of the study should be the organization of instruction, particularly in Title I schoolwide programs. He predicted the study would find less duplication of services, greater coordination of programs, and less fragmentation. He hoped the study would identify such changes and be able to capture what is driving them, recognizing that it may not be the legislation but other factors; this could help inform policy and technical assistance. Linda Rodriguez agreed, noting that other factors, such as a health facilities grant or innovative use of migrant education funds, might explain why reforms happen faster in some schools than in others.

Mr. Jennings cautioned that the program has been hurt in the past by single sentences in evaluations that say, "The Title I program hasn't raised achievement." He noted that the program is enormously flexible rather than prescribing any particular instructional approach. Therefore, the evaluation must identify what the state or school has done to raise student achievement. In general, the analysis needs to be more sophisticated than in the past. Dr. Scott responded that ED will try to link instructional practice to outcomes with this study.

In concluding her briefing, Dr. Scott informed the panel that the contractor will have the opportunity to revise the data collection design, with input from the IRP. Additionally, she noted that ED recognizes the burden of this study on schools and will give each school a computer (linked to ED by e-mail) as an incentive to participate. Dr. Ruiz suggested that this study should be connected to the other studies the IRP has been briefed on. Dr. Scott responded that ED is moving in that direction and is collaborating on the issues and items being evaluated. Jennifer O'Day added that the panel needs to identify themes and issues (resource allocation, professional development) that cut across all of the studies, as well as how they are addressed in the studies. She suggested that the panel develop ways to make substantive connections among the studies. Dr. Ginsburg concurred and handed out a cross-cutting evaluation framework.

Discussion Regarding the State, Local, and School-Level Implementation Studies, Focusing on Key Questions, Embedded Sampling, and Strategies for Linking Responses
After distributing an evaluation framework for current and planned PES studies, Dr. Ginsburg explained that ED is conducting or planning two sets of studies: (1) a set at the district level centering on awareness and understanding of the new legislation. These studies are asking districts, "Where are you having the most trouble implementing the legislation? Where do you need assistance? Where are you getting technical assistance from?" "We are trying to see what the customers actually think they are getting. This is a baseline and we want to use it to improve," he stated. Results will be available in the fall. (2) Implementation studies. An in-depth study like the longitudinal study of schools is not affordable on a nationally representative basis. Lacking the money to do testing or on-site classroom reviews on a national basis, PES still needs a way to talk about what is happening in Title I as a whole: Is classroom instruction changing in response to standards?
Is professional development aligned? Thus, PES plans to field studies that have linked samples. One study is already looking across programs in all 50 states. Large, representative samples of districts and schools will respond to questions about awareness, implementation, and outcomes, with some questions replicating those asked in the more in-depth studies. Dr. Ginsburg and Joanne Bogart presented some specific questions for the panel to address:

1 - Originally, ED did not plan to collect student outcome data, because of concerns about its uniform availability and possible unreliability. Does the panel agree with this decision?
2 - Drawing embedded samples. Should the studies overlap? Should they target particular states, such as the Ed Flex states?
3 - Is there any hope of states funding a within-state-representative sample? ED could develop the tools for the states to use.
4 - Can ED get data on instructional changes? Is content exposure changing? Can any of Porter's instruments be used on a large scale?

As the panel discussed these issues, several panel members noted other, larger concerns that need to be addressed as well:

Function of the studies. Dr. Baker suggested that the panel frame the discussion by thinking about the function of the studies, e.g., capacity building. "It has to be clear to people why you're doing these--either to be helpful or to be ... objective," she said. Dr. Johnson also spoke to the importance of the formative purposes of the studies. "If we find educators are not aware of the flexibility provisions, such as the waiver authority, it would be difficult to say whether the authority was effective. The studies have to help states, technical assistance providers, and others to have their practices informed further by this," he suggested.

Data collection methods. Dr. Fuhrman asked whether the sole method of data collection on implementation would be surveys. Dr. Ginsburg responded that the surveys would include collecting profiles, report cards, and other information, but that PES could not afford site visits. The surveys are going to have an embedded sample that will link the different levels. Dr. Fuhrman then cautioned that previous implementation research shows, "you have to be there and talk to a lot of different folks to put the implementation story together. I would feel much better if Elois's study would be connected to state-level field work as well. After this year, you will have no more state-level data collection," she noted.

Cost issues. After noting that he liked the idea of coordinating the surveys and studies in order to have a more representative sample, Dr. Vinovskis cautioned PES to consider alternative uses of funds. He said that student outcome data may be available but may be very expensive to collect; given that, would a better use of funds be extending Dr. Scott's sample?

Respondents. Dr. O'Day questioned whether the appropriate respondents had been selected. "I'm concerned about just going to superintendents and principals in districts," she cautioned. Just as the field work should be broad-based, the surveys should go out to a variety of people.

Ms. Bogart noted that ED has been trying to track the research efforts of many of the panel members to identify overlap in samples. Dr. Ginsburg asked the panel whether overlapping was a plus or a minus. Dr. Fuhrman responded that if some of the studies do not overlap, then the studies will have no substantive state-level implementation information.
"You have to make some sense of the surveys and make sense of the data," she noted. Obtaining and Reporting Student Outcomes Use of NAEP for Evaluating Title I Pascal Forgione, Commissioner of the National Center for Education Statistics (NCES), along with Steve Gorman and Arnold Goldstein, briefed the panel on the feasibility of using NAEP data to evaluate Title I. Dr. Forgione stated that NCES was very interested in working as a partner with PES in this area. "We would like to ask for a more ongoing involvement with you. We need to make assisting you a priority and would like to put someone on your technical advisory group and be of some help," he noted. Mr. Goldstein presented a brief overview of a paper that provided basic background on what NAEP collects about Title I students and raised some issues about identification and services. After the presentation, the panel's questions and discussion centered around the lack of both a poverty variable and other parental background information that would aid in analyzing Title I students. Poverty variable. NCES, in the next administration of NAEP, will be able to identify students as recipients of free and reduced price lunch for the first time. Several panel members noted the importance of having some mechanism or variable for determining the poverty level of students. However, Dr. O'Day also acknowledged that collecting any type of socio-economic status information is a very heated political issue, and cautioned the panel to recognize that NCES should collect the best information possible in this political climate. Parent background. Several panel members also discussed the need for additional contextual information about parents and students. For example, one area of interest, parental education, is currently collected through student reports, which are not considered reliable. Gary Phillips of NCES noted that the NAEP staff had developed a pilot parent questionnaire, but the National Assessment Governing Board (NAGB), which oversees NAEP, had concerns about the questionnaire so it was never field-tested. Dr. Forgione spoke of the tension NAEP is facing as it works to pare down data collection and preserve the data that are really needed. Dr. Phillips mentioned that NCES recognizes it needs to put more time and money into developing background items, and plans to develop an alternative parent questionnaire. Mr. Jennings understood that the NAGB response to the pilot questionnaire may have been influenced by complaints the board heard, such as that NAEP collects background data that are not analyzed. His concern is that NAEP may be considering cutting information, just as researchers are finally analyzing the NAEP background data. Dr. Porter stated that it seemed clear that using NAEP for Title I evaluation could not work very well. But he added that this is not a big surprise, since NAEP was not designed to evaluate Title I. He questioned whether NCES should become involved in evaluating Title I due to the difficulty for NCES in terms of entering the interpretation business. Yet Dr. Porter also said that if ED were to say NAEP should not be used for Title I evaluation, there will be a very strong and vocal group questioning why not. Dr. Forgione concluded his comments by informing the panel that NAEP is moving to an annual schedule and that NCES is working to have NAEP items embedded in state assessments. Also, he distributed his most recent report to NAGB. Dr. 
State and Locally Reported Achievement Results and the Title I Federal/State Collaborative
Susan Wilhelm and Ms. Bogart briefed the panel on ED's need to collect data on student outcomes and asked for the panel's assistance in selecting sources. Currently, Ms. Bogart said, the plan is to work with four pilot states (MD, KS, KY, and OR). However, it might be better to collect data from all 50 states, or data from some districts and individual schools. Additionally, the panel should consider what ED should collect and what proxies other than poverty could be used for national reporting. Other questions to consider include what ratio of high-poverty to low-poverty schools is desirable and how to analyze outcomes in relation to reform. Much of the discussion focused on how to interpret "adequate yearly progress" and what data from state assessments will help ED measure it. Dr. Fuhrman commented that current state assessment systems are not designed to measure adequate yearly progress in a way that an average person could understand. Additionally, the panel has spent considerable time thinking about how to look at student progress over time, yet the entire evaluation will depend on state assessment systems that are pretty much divorced from this. "I think that what we decide here about what outcome data we value has to speak to whatever measures the states use," she concluded. Dr. Johnson noted that framing the issue in "adequate yearly progress" terms implies a shift from looking at individual students to looking at the extent to which more students are meeting the standards. Dr. Ginsburg agreed, noting that the law calls for districts to look at successive cohorts and to collect data on changes in the proportions of students who are meeting certain proficiency levels. Dr. Baker observed that despite this required shift, the quality of most assessment systems is uncertain. She added that, down the road, most parents are going to focus on the "individual kid" report. "No matter how we start ... it ends with individual kids. We are in a developmental cycle in assessment that will run 8-10 years. We need breakthroughs to make the cost reasonable so that we can get to the point of annual measures on the same kid," she stated. The problem is to recommend characteristics for measures; this must be done in a well-thought-out and sensitive way. Dr. Johnson agreed with Dr. Baker and commented that he had seen the move toward assessing individual students' progress in Texas, where four years ago there was testing at grades 4, 8, and 10; now testing happens in grades 3-10, and the state has developed a metric for benchmarking scores and examining individual students' year-to-year progress toward a standard. Ms. LeTendre agreed that teachers do not understand why the law tells them to look at the scores of different students from year to year. She also noted that 50 percent of Title I participants are in grades preK-3 and thus do not take achievement tests. She emphasized the need for a picture of whether the program is working nationally, using "information we can stand behind.... The appropriations committees want answers. The law does not give us the leverage to get the information that parents want." Ms. Bogart asked the panel to consider whether proxies should be used, noting the examples of Texas and Kentucky, which report the percentages of high-poverty students passing tests or reaching certain levels.
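The successive-cohorts idea Dr. Ginsburg describes, tracking changes in the proportion of students meeting a proficiency level, reduces to simple arithmetic. The sketch below is illustrative only: the scale scores and cut point are invented, whereas real state assessments would supply actual scores and state-set performance standards.

```python
# Hypothetical scale scores for the grade-4 cohort in two successive years,
# with an invented proficiency cut point.
CUT = 300
cohort_year1 = [280, 310, 295, 320, 305, 290]
cohort_year2 = [300, 315, 290, 330, 310, 296]

def pct_meeting(scores, cut):
    """Percent of students scoring at or above the proficiency cut point."""
    return 100 * sum(s >= cut for s in scores) / len(scores)

# Adequate-yearly-progress-style reporting compares successive cohorts,
# not the same students, so the statistic is a change in proportions.
change = pct_meeting(cohort_year2, CUT) - pct_meeting(cohort_year1, CUT)
print(f"change in percent meeting standard: {change:+.1f} points")
```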
Ms. Bogart asked whether these are valid ways of looking at the data, or whether Title I participation would be a more appropriate measure than poverty. Dr. Johnson suggested using whatever indicators can show whether the gap is closing. He said we should move away from the idea of "Title I students," and focus more on the overall schools. He said, "Whether it's a targeted assistance school or not, the real issue is this--in this Title I school are students achieving at the expected levels? If they are doing so in greater numbers and greater percentages, that's an indicator of success. If they are not, it needs to be looked at." However, Ms. LeTendre was concerned that in targeted assistance schools, this is a troublesome way to measure the success of Title I because the program supports only limited services for a limited number of the school's students. Dr. Johnson responded that the question still has to be: Are we getting all kids in that school to meet the standards? This led to an intense conversation on attribution, with Dr. Porter asking, "Do we want to say whether Title I works?" If so, we would need data to show Title I as the causal agent. Yet, as Dr. Fuhrman pointed out, state reporting systems are not set up to do this. Dr. Porter noted, "We know how difficult it is. When we design studies to attribute changes in achievement to a program, it's very hard to do." He added that another obstacle is that people are saying about reforms or programs, "If it's systemic then it is everything and you can't possibly attribute it to any one thing." Mr. Jennings stated that Title I administrators and teachers "feel naked" because there are no test data to defend the program. He suggested ED condition the public and Congress to accept a more complex answer, one that involves multiple measures of achievement, such as the data reported in Kentucky and Maryland, extrapolation from other data such as NAEP results, and reviews of individual school districts' data. Mr. Jennings also reiterated his belief that ED is in this dilemma because Title I is so flexible and cannot dictate what type of approach should be used. Ms. Montano asked if PES plans to develop state profiles that show what indicators states are using. Dr. Ginsburg said yes, and added that ED plans to use multiple indicators (NAEP, state assessments, etc.). Dr. Ginsburg asked the panel to consider what the goal of Title I is--is it to close the gap? to raise low-income kids' achievement? to make challenging standards apply to all kids? If people accept that as a goal, even if there is no one standard and many measures of success, the objective must be to move more kids up to challenging standards. If studies cannot show some success in the highest-poverty schools, then there is a problem, whether or not it is attributed to Title I. Dr. Porter agreed that there should be different measures and standards, but he reiterated that the attribution issue cannot be avoided. If studies find no growth in the poorest schools, then they are saying that Title I has had no impact. That may be incorrect in some instances. At base, what people want to know is "Is Title I worth it?" Dr. Johnson agreed and cautioned that ED needs to counter the belief of many people that high-poverty schools cannot work. ED's premise in supporting Title I is that high-poverty schools can work, and this is the ultimate message that should come out of the longitudinal study, he said. If ED can't show progress toward closing the gap, Title I will be blamed. Dr.
Baker suggested that the panel could develop patterns of growth or change one might expect under different conditions, and map the available data onto them. ED could look at data in the context of alternative expectations; for example, you would expect performance to go up if children stay in the same school and there is not a big influx of high-poverty students. She cautioned that, in the short run, high standards make the gap grow wider. To demonstrate the seriousness of this issue, Dr. Vinovskis asked the group, "What if ED kills Title I and it works? What if Title I works and the problem was somewhere else, such as in summer activities, and ED did not set up a design to get at that?" Ramsay Selden said the main evaluation question is: "Are states making progress in getting Title I students to meet the standards at the same rate as the other students?" "Adequate yearly progress" is an agreement between the state of Kentucky and ED, and between Kentucky and its LEAs. Ms. Bogart responded that yes, this is the framework, but it is not in place yet and ED needs something to report in the meantime. She asked whether ED should report on 50 different state measures and/or try to find common indicators. Dr. Selden responded that for the four or five states that fit the desired model, ED can report what the law asks for. For the other states, ED has to present 45 different profiles of what is available, and recognize that every one of them may be different. Dr. Porter cautioned that if ED is able to establish such agreements, there might be difficulty in discerning whether any changes that appear can be related to the federal investment in Title I. For example, Kentucky has made a substantial investment in education reform, and could have reached that point on its own. It is unclear what ED is responsible for in evaluating Title I. Mr. Jennings agreed that ED has to take each state on its own terms. He said, "It's what states think that they have to do to raise achievement that truly matters." Dr. Ginsburg responded that the ultimate outcome is student learning and that ED needs some recognizable set of characteristics. Dr. O'Day stated that Title I was intended to leverage change that would eventually help students, and that the implementation studies must examine interim outcomes such as states' movement toward high standards. Dr. Baker added that Title I is not a program with defined attributes, and that the issue is figuring out how to identify the value added by "a catalytic source of funding." The panel then turned to a matrix of studies that PES distributed, and asked that future versions:
- be organized by study and by issue, with a less linear approach;
- allow for connections and linkages (the current matrix made it hard to see how the pieces fit);
- show who holds each contract and the level of effort; and
- list the sample size and the states involved in each study.

Presentations and Discussions of the Evaluations of Other Title I Program Areas

Title I-Part B, Even Start

Ms. LeTendre provided the panel with a brief description of the Even Start program and the key issues for its evaluation. She thanked Ms. Darling and Mr. Jennings for their support and involvement in Even Start, and noted that Even Start has received a great deal of federal interest and oversight. Even Start is the nation's largest federal investment in family literacy; its intention is to bring parents and children together to break the cycle of illiteracy and poverty.
In this program, several agencies and organizations work together to integrate adult education, early childhood education, and parent-child education into one effort. Some of its key goals include: helping parents become full partners in their child's education, helping children reach their full potential, and providing literacy training. Even Start programs are evaluated annually, which helps direct attention to issues in the programs immediately and continually. She noted that the program encourages grantees to look at the education levels of participating families and consider total family literacy, and that she has witnessed how involvement in the education process changes parents' expectations for their children. In response to a query from Dr. O'Day, Ms. LeTendre responded that some of the family literacy programs are operated in primary languages other than English. Finally, she noted that ED has identified quality indicators, which are shared with programs so they can benchmark where they are. Mr. Jennings observed that Ms. LeTendre's remarks reminded him of the way people talked about Title I in the 1960s. He described first the enthusiasm, then the pressure for accountability which led to the program becoming very rigid, and eventually the restoration of flexibility in the latest reauthorization of Title I. The measure of program success also changed continuously through the decades. "What I see with Even Start is the enthusiasm of a new program and a different structure--a glue program. My question is how can we guard against rigidity and have measures of success that say that things have gotten better?" he asked. He cautioned ED to pay close attention to its measures of success so that Even Start does not repeat the history of Title I. Ms. LeTendre agreed and noted that the program has multiple measures of success. Later, Mr. Jennings mentioned another concern: while Even Start has much current support in Congress, the program needs more objective criteria and agreed-upon measures that will allow ED to defend it over time. Ms. Darling added that the program is really gluing pieces together at the margins of the education arena. She said that the improvements caused by the program are difficult to quantify and put in a report to Congress: positive changes in the home, in the child-parent relationship, and in the quality of parent-child interactions. Similar to Mr. Jennings's concerns, Dr. Vinovskis indicated that he was nervous about the program. He said that early research found relatively little difference between Even Start programs and non-Even Start programs. He questioned whether the money could be better spent on more Head Start funds, and asked whether the evaluation would examine different models to determine which work best. Ms. Darling responded that the evaluation did look at a range of models. Yet Mr. Jennings pointed out that even though ED can identify which models are more effective, the federal government cannot mandate the use of any one model. He asked the panel: how do you reconcile this tension?

Presentation of Even Start Evaluation

Fumiyo Tao of Fu Associates and Robert St. Pierre of Abt Associates briefed the panel on the national evaluations of Even Start. Fu Associates is currently conducting the second four-year evaluation; the first covered 1990-1994, and the second runs from 1994 to 1998.
The evaluation design consists of two major components: (1) a universal study in which data are collected on every program and presented in a descriptive report; and (2) an in-depth sample study of 57 programs that are evaluated in greater detail, with pre- and post-tests given to the children and parents. Ms. Tao presented data from the universal study, focusing on the following areas: program growth, geographic distribution, models used, hours of service (adult education, early childhood education, and parenting skills) provided, types of parenting activities offered, participation levels, major implementation challenges, key characteristics of participating families, retention rates, and reasons for leaving programs. Dr. St. Pierre presented data from the in-depth sample study. His findings included the following:
- Using the Preschool Inventory, the study found that the Even Start group gained more than the control group and more than predictions based on normal development.
- Overall, Even Start participants had higher levels of GED attainment: 6 percent of those in the control group earned their GED during the study period, compared with 8 percent of those in all Even Start programs and 22 percent of those in the sample study.
- The study also indicated gains in parenting skills, based on a home environment measure.
Dr. St. Pierre said the study had several implications for family literacy programs: the program's intensity matters; parenting education can affect children's learning gains; service location matters; project size and staffing seem to be unrelated to gains; and the amount of time parent and child spend receiving services simultaneously is important. Panel members had several questions and comments for the evaluators:
- Mr. Corwell asked if child care is part of the funded model. Ms. LeTendre responded that it is an allowable activity, and the projects vary in the child-care support services they provide.
- Dr. Ruiz asked if any data were collected on literacy levels in any language other than English. Dr. St. Pierre responded that they were not. Dr. Ruiz cautioned about interpreting literacy readiness findings because study participants are sometimes in homes that are literate in languages other than English.
- Because Title I in general and Even Start in particular work with so many Hispanic families, Dr. Baker suggested that measures of first-language proficiency are important in every major evaluation that ED is going to be doing. It is important to try to get the best information.
- Dr. Ginsburg asked the evaluators if they provide feedback in the annual individual project profiles so that projects would know where they stand on some key developments. Dr. St. Pierre said they do not now, but could do so.
- Dr. Vinovskis said that he was disappointed that there will be no control group in the next evaluation, because a control group is very important.
- Dr. Porter noted that important lessons can be learned from the early evaluations of Head Start, which showed that effectiveness varied according to the approach. A second lesson is the importance of making an investment and acknowledging that ED is in it for the long run; we may have to "stick with it" 15 or 20 years before we see the results.

Title I-Part C, Migrant Education

Bayla White from OESE briefed the panel on migrant education. She said that migrant education programs have many questions but no database that will provide the answers. This is primarily because migrant children are largely invisible.
While the program is thirty years old, it is still seen as one of the more innovative programs. These programs are truly supplemental, extending the day in ways Title I has never done. The program coordinates services because the children and families need this, and it maintains an incredible amount of communication across states, programs, and schools about children. She provided data on demographic characteristics, services received, program appropriations, geographic distribution, and basic eligibility requirements. Ms. White explained to the group that funding for migrant students is not based on a census of migrant students, but rather on annual reports from the states. ED is beginning to ask states to provide two unduplicated counts: one of eligible migrant students from September to August, and a second of students who receive services during the summer and intersessions. Funds distribution is similar to the Title I formula, with funds based on state average per-pupil expenditure. ED has drafted a set of performance indicators for migrant programs, and Ms. White explained that these indicators include the following: (1) ensure that migrant children are reaching the same high standards set for all children; (2) increase coordination and integration of education and support programs to meet the needs of migrant children; (3) reduce administrative costs and increase direct services to students; (4) increase inter- and intra-state cooperation to improve educational continuity for migrant students; and (5) promote greater parental involvement among migrant families. Ms. White suggested that the panel could assist in designing a program and allocation procedure that reflects the different service costs for different models (school year vs. summer), as well as help determine the research needed to examine basic questions about demographics, mobility, and effects on children's education. Jeff Rodamar of PES said a study has been mandated that will examine the involvement of migrant students in schoolwide programs. He noted that reauthorization stipulations led to a change in the clientele served: only those who have moved in the past year are now eligible. He added that ED is putting together a study of migrant students' participation in wider programs, looking at issues of access and limited English proficiency, international migrancy, linking programs, and coordination. Bill Strang of Westat informed the panel that the evaluation is in the design stage and will focus on answering the general congressional question: what happens to migrant children in schoolwide programs and why? Westat will conduct a telephone survey to match areas with migrant students and schoolwide projects. Also, the study will seek out programs that serve secondary students (difficult since few schoolwide programs are in high schools), and develop case studies of about 20 schools. The study will also explore summer programs. The key issues are similar to those for Title I overall; they include: planning and needs assessment; how the needs of migrant students are distinguished from those of other students served in Title I; how plans involve parents generally; how data from state assessments are used; and how well plans are implemented. Mr. Strang acknowledged the difficulty of looking at implementation with data collection centered on surveys, and noted that the study will include some site visits. Dr.
Johnson suggested that including migrant students in the longitudinal study would provide an opportunity to address many of Ms. White's concerns. Dr. Vinovskis agreed with Dr. Johnson and suggested that Ms. White try to have an oversampling of migrant children included in the longitudinal study. Ms. White responded that she was uncertain that this option would work, since migrant students are not present when data are typically gathered--e.g., they come to Texas in November and leave in March, so they are not there when schools collect data in May and September. "This is an important population, but to find out what is going on is going to take an investment," she stated. The data collection problems noted by Ms. White led to a broader discussion of the need for better databases. Dr. Baker pointed to a more general concern about the ability to follow children across district and state lines, and the need to get data infrastructures in place. When the discussion turned toward the topic of a national computerized database, Dr. Selden cautioned against such a concept. He predicted that while the public may accept their LEA developing a unit-record system containing a large amount of information, as soon as such a system is discussed at a statewide or national level, concerns about abuse would be raised. Dr. Selden said, "What is emerging is a decentralized system that would have as its highest level a statewide system. What is needed are ways of making it possible for school systems to exchange data and use their respective data systems collaboratively." Dr. Baker agreed and noted that a lot of data exchange will probably occur at school and district sites. She suggested that perhaps CCSSO or NGA could figure out a process for managing this. The panel postponed (indefinitely) its discussion of the evaluation of Title I Part D, Services for Neglected or Delinquent Students.

Discussion on Subgroup Activities and Other Issues

During the final discussion the panel reviewed the matrices developed by PES staff on the various studies and their characteristics. Panel members agreed to send in information about their own studies that correspond in some way to those listed. Dr. Fuhrman stressed the importance of documenting whether the samples overlap. Dr. Vinovskis stressed the importance of having an understanding of what the final report will look like, who will be responsible for writing it, and the timeline. Dr. Ruiz commented that in the Processes matrix, Coherence Across Processes should be its own objective. Dr. Ginsburg asked the panel what issues should be examined at the next meeting, and he received tacit approval on the following: national education goals, safe and drug-free schools, violence, and waivers. He explained that the panel's charge is to look at these programs to determine their overall effect on children's readiness for school and other results. Dr. O'Day suggested that PES staff could distribute briefing memos or papers to the panel on other programs (Neglected/Delinquent, Eisenhower, School-to-Work, etc.), rather than have presentations during the panel meetings. The panel discussed the types of subcommittees that should be formed and agreed that three subgroups were needed in the short term to focus on: (1) assessment/outcomes issues; (2) implementation issues; and (3) the "big picture" (identify what holes exist, provide a strategic view). Dr.
Fuhrman suggested that the panel should begin work immediately on the implementation and assessment/outcomes subcommittees because studies are starting. Dr. Vinovskis also stressed that the panel should immediately start to think through the longitudinal study and determine what should be discussed if the panel is going to make a contribution to it. Dr. Plisko reminded the panel that all of the big studies have expert panels and that IRP members can serve as members of those expert panels and report back to the larger group.

Meeting Adjourned

Summary of the Title I/ESEA Independent Review Panel Meeting
March 7-8, 1996

Independent Review Panel (IRP) Members Present:
Eva Baker, University of California at Los Angeles
Joyce Benjamin, Oregon Department of Education
Rolf Blank, Council of Chief State School Officers
Christopher Cross, Council for Basic Education
Sharon Darling, National Center for Family Literacy
Susan Fuhrman, University of Pennsylvania, Consortium for Policy Research in Education
Dan Gutmore (designee for Beverly Hall), Newark City Schools
Jack Jennings, Center on National Education Policy
Joseph Johnson, University of Texas at Austin
Phyllis McClure, Independent Consultant, Washington, DC
Jessie Montano, Minnesota Department of Education
Jennifer O'Day, Stanford University
Mary O'Dwyer, Oyler School, Cincinnati, OH
Linda Rodriguez, Pasco County School Board
Richard Ruiz, University of Arizona
Ramsay Selden, Education Statistical Services Institute
Maris Vinovskis, University of Michigan

U.S. Department of Education Staff Present:
Joanne Bogart, Planning and Evaluation Service (PES)
Barbara Coates, PES
Alan Ginsburg, Director, PES
Robert Glenn, PES
Mary Jean LeTendre, Director, Compensatory Education Programs (CEP)
Oliver Moles, OERI
Valena Plisko, PES
Sue Ross, PES
Elois Scott, PES
Stephanie Stullich, PES
Susan Wilhelm, CEP

Meeting Summary

The following is a summary of the discussions held at the fifth meeting of the Independent Review Panel (IRP) on March 7-8, 1996. The summary is organized around the key issues on the meeting agenda: (1) a brief discussion regarding the interim report, panel statement, and congressional hearing; (2) the design for the Longitudinal Evaluation of School Change and Performance; and (3) a discussion of the panel's mandate under ESEA Sec. 14701 and other data collection efforts.

Discussion regarding the interim report, panel statement and congressional hearing

Christopher Cross, the chairperson of the IRP, briefed the panel on the Congressional hearing before the Senate Subcommittee on Education, Arts, and the Humanities, held on February 20, 1996, to discuss ED's Interim Report on the National Assessment of Title I and the panel's independent statement to Congress. Chairperson Cross noted that in his testimony to Congress, he stressed the two main points in the panel's independent statement: (1) it is unrealistic to judge the law's success based on nationwide student achievement outcomes in 1996 or 1997, when the law will only just have taken effect; and (2) the National Assessment lacks the money to fulfill the Congressional mandate. In addition, Chairperson Cross urged Congress not only to revise and extend the timeline but also to provide adequate funds for the National Assessment. He said that the members of Congress present at the hearing appeared receptive to these concerns.
Alan Ginsburg of ED's Planning and Evaluation Service (PES) also shared his thoughts on the hearing, noting that it is an important signal of the new emphasis being placed on program evaluation. He said that during the last ESEA reauthorization cycle, policymakers geared evaluation efforts toward the reauthorization of the legislation; they are now realizing, however, that evaluating programs before the end of the reauthorization period and using evaluation data to guide program improvement are essential. "I can't remember a time when people were more focused on evaluation results," he said. Finally, panel members took a moment to praise ED's work on the Interim Report. Above all, members said they were impressed by the proactive tone of the report.

Design for the Longitudinal Evaluation of School Change and Performance

Design Overview and Summary of Data Group Meeting

Joy Frechtling of Westat and Ramsay Selden, a panel member, briefed the panel on the Data Subcommittee Meeting, held on February 13, 1996. The IRP formed the Data Subcommittee to explore design and data collection options for the longitudinal evaluation of school change and performance. Mr. Selden and Ms. Frechtling informed the panel that the subcommittee agreed that the study should:
- Address the implementation of standards-based reform in general, and not attribute changes in schools and students to Title I. The group agreed on this focus because Title I is consistent with the standards-based approach and because the study's short timeframe does not permit isolating Title I effects. Nevertheless, the study should still examine patterns of relationships between the implementation of Title I and student achievement.
- Sample states on both ends of the systemic reform continuum. That is, the study should look at states that have in place challenging content standards that are aligned with state assessments and states that do not. Depending on the design, the number of states included in the study could range from four to eight.
- Consider factors, in addition to standards and assessments, that might affect state education reform. That is, the study should consider the entire state education reform context, including such variables as the size and composition of the student population.
- Examine school reform at the district level. Once states are selected, the study contractors will network with experts in the field to identify candidate districts. Depending on the design, the number of districts per state could range from two to four.
- Concentrate on high-poverty schools (those serving 50 percent or more students from low-income families) and elementary schools, but still examine some low-poverty schools (35 to 50 percent free/reduced-price lunch).
- Collect student achievement data that are consistent across states and districts. For example, the subcommittee recommended possibly adapting NAEP items for the study.
After the briefing, panel members raised several questions and concerns, including:
- The need to clearly define terms such as "high standards" and "alignment." Because such terms are often interpreted differently, the study should specify the criteria used to distinguish between high-reform and low-reform states and districts.
- The need to delve deeply, to go beyond reform rhetoric. As one member noted, the fact that certain structural elements are in place (standards, assessments, professional development) does not mean that reform is actually taking place in schools.
What matters most of all is whether reforms are reaching the classroom and whether teachers are employing high standards for all students.
- The need for the longitudinal study to fulfill Congress' mandate. Based on the legislation mandating the study, some Congressional members may expect the longitudinal evaluation to be an attribution study; if the study is not, Congress should be told of the change in design and the rationale for this change. Similarly, if the study fails to attribute student gains to Title I, Congress might be reluctant to continue funding the program.
- The problem of using a common student achievement assessment (such as NAEP) across states, given that such an assessment may not be aligned with individual state content and performance standards. This problem was not resolved, but panel members agreed that common assessments would be used to supplement--and not supplant--assessments specific to individual states.
- The potential mismatch between the design of the longitudinal study and the spirit of Title I. One member felt the new Title I legislation is an attempt to improve the capacity of schools, so that schools can help all students reach high standards. Thus, instead of tracking individual students, the study should focus on whether school capacity is improving. In response to this concern, the panel agreed that the study should track both students and schools.
- The opportunities for collaboration. Panel member Susan Fuhrman announced that her organization, CPRE, hopes to undertake a 12-state study focused on the intersection of state, district, and local reforms. Panel members agreed that if this study is conducted, ED and its study contractors should work with CPRE to try to collaborate and combine study resources to the greatest extent possible.

Student Assessment Options

Joanne Bogart of ED said that the Department is considering a variety of student assessment options to measure student achievement in the longitudinal study. Two of these options are the TerraNova Student Assessment, a commercial test published by CTB/McGraw Hill, and the National Assessment of Educational Progress (NAEP). Val Plisko of ED explained that if NAEP were used as part of the longitudinal evaluation, ED would not adopt the test wholesale but would probably draw from a pool of NAEP items and adapt these items for the longitudinal study.

TerraNova

Representatives from CTB/McGraw Hill briefed the panel on the company's newest assessment series, TerraNova. The assessment system contains five major components: (1) Surveys; (2) Batteries; (3) Multiple Assessments; (4) Performance Assessments; and (5) a Customized component. The system also tests students in four content areas: (1) reading; (2) math; (3) science; and (4) social studies. Representatives said that the third component, Multiple Assessments, would be of most interest to the panel and to ED. This component is composed of two types of questions: (1) selected-response items and (2) short constructed-response items. The Multiple Assessments edition merges these two types of questions to strike a balance between the breadth and depth of student responses. Students who take the tests are ranked on a scale from one to five, with five signaling advanced achievement. Performance standards (or cutoff points) will be set this spring and will involve input from a range of education experts.
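As a minimal illustration of how such cutoff points function once they are set, the sketch below maps a continuous scale score onto the five performance levels just described. The cut scores are invented placeholders (the actual TerraNova standards had not yet been established), so only the mechanism, not the numbers, is meaningful.

```python
# Hypothetical sketch: mapping a continuous scale score to one of five
# performance levels via cut scores, as described for TerraNova above.
# The four boundary values are invented placeholders.
from bisect import bisect_right

CUT_SCORES = [520, 580, 640, 700]  # hypothetical boundaries between levels 1-5

def performance_level(scale_score):
    """Return a level from 1 (lowest) to 5 (advanced achievement)."""
    return bisect_right(CUT_SCORES, scale_score) + 1

assert performance_level(500) == 1   # below the lowest cut score
assert performance_level(650) == 4
assert performance_level(720) == 5   # "advanced achievement"
```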
According to CTB/McGraw Hill representatives, some of the strengths of the TerraNova assessment are its capacity to:
- Provide reliable, high-quality, norm-referenced data on student achievement.
- Establish a link to CTBS/4, an assessment used by ED in the past.
- Establish a link to national content standards. The NCTM Standards support the mathematics test; the reading/language arts test is based on an analysis of state curriculum guides, the most current draft of the English/Language Arts standards (produced by the International Reading Association and the National Council of Teachers of English), and the conceptual framework of NAEP.
- Establish a link to state- and local-level databases, which will promote coordination between Title I and state data collection efforts. Kentucky and Maryland have signed on to use CTBS-5.
- Emphasize higher-order thinking skills. Organized around themes, the tests require high-level analysis, inference, and synthesis.
- Emphasize equity. The test presents diverse problem-solving situations and features engaging topics that appeal to many different students.
- Be customized, by mixing and matching the five component pieces at different grades or by adding a custom supplemental state module tailored to specific state frameworks.
- Use imaging technology, which allows a permanent record of students' work, promotes efficient management of information, and helps control costs.
In response to this presentation, panel members raised several questions and concerns regarding the following:
- How to determine the degree of alignment between TerraNova performance standards and state and district standards. A TerraNova representative explained that although states and districts can modify TerraNova's performance standards, and although not every case will be a perfect fit, he expects a fair number of cases in which there will be "reasonable alignment."
- Why only TerraNova representatives were invited to the IRP meeting. In response, Val Plisko said that ED was interested in TerraNova because of its link to national content standards, its testing in the early grades, and its proficiency levels; however, ED has in no way ruled out other assessment options.
- Why so many states are returning to norm-referenced tests. Most panel members agreed that this shift has been spurred by widespread pressure from parents and politicians for accountability and for additional evidence of student achievement. As one panel member said, "In Kentucky (which will administer CTBS-5), they weren't sure they could defend their system on the basis of performance assessments alone. The problem now is that by trying to satisfy competing constituencies, the state is sending out mixed signals."

The Use of NAEP for National and State Trend Analysis and Potential for the Longitudinal Evaluation of School Change and Performance

Arnold Goldstein and Steve Gorman briefed the panel on the potential uses of NAEP as a student assessment measure and on the possibility of using NAEP trend data to inform ED's longitudinal evaluation. Goldstein and Gorman said that the benefits of using NAEP (or at least some pool of NAEP items) include the fact that NAEP is standards-based and consistent. Also, the main NAEP exam has recently begun to collect Title I-specific data, a feature that will allow NAEP to show differences in achievement between Title I and non-Title I students.
One of the disadvantages of NAEP, however, is that student- and school-level results are confidential; thus, it would be impossible to use NAEP for a longitudinal evaluation. In terms of trends, NAEP results in reading and math show a consistent pattern of "disadvantaged urban" students performing at levels significantly below their "advantaged urban" and "extreme rural" counterparts. Mr. Goldstein predicted no significant changes in future trends; that is, the scores of disadvantaged urban students will probably continue to remain below those of their peers. In response to this presentation, panel members raised several questions and concerns, including:
- What explains certain trends in student achievement, as reflected in the trend data presented by Goldstein and Gorman? As one panel member commented, test results are worthless unless there are explanations for why scores go up and down. In response to this comment, Arnold Goldstein said that it is not NAEP's role to provide explanations and that he would leave it to others to interpret results. In their attempt to do so, panel members touched on a range of possible explanations, including changes in curriculum, increases in poverty, and the possibility that tests are culturally insensitive. However, the panel agreed that they could offer no definitive explanations. One panel member suggested that researchers get as close to the classroom as possible in their quest for explanations; in other words, are teachers actually teaching what NAEP is measuring?
- What is the likelihood of misalignment between NAEP and state content standards? Echoing a familiar concern, one panel member questioned the value of using NAEP as a student assessment option when the new Title I legislation specifically asks states to develop their own content standards. Put another way, what would be the rationale for using NAEP if students are not being taught what NAEP measures? Another member added that this question applies not only to NAEP but to virtually any national assessment.
Given the numerous questions raised by the discussion of the data group meeting and the TerraNova and NAEP presentations, the panel decided to form a special subcommittee to explore these student assessment issues in detail. The subcommittee will meet on March 28, in Washington, to discuss the two views debated at the meeting:
1. That a national standardized test should be administered to all students in the sample of Title I elementary schools. Students across the sample would all be measured on the same instrument, thereby providing uniformity of measurement.
2. That an evaluation of a selected number of states and districts representing different levels of implementation of standards-based reform under Title I and Goals 2000 cannot be done by a nationally normed test.

A Discussion of the Panel's Mandate under ESEA Sec. 14701 and Other Data Collection Efforts

The Use of State Assessment Data for Title I

Ed Roeber from the Council of Chief State School Officers (CCSSO) briefed the panel on: the challenges of using state assessment data for Title I; current state student assessment activities; and ways to improve state responses to the new legislation. With the context set, three members of the panel--Joyce Benjamin, Jessie Montano, and Joe Johnson--described assessment activities in their own states and commented on their states' capacity to contribute to an evaluation of Title I. According to Mr.
Roeber, some of the major challenges in using state assessment data include:
- How to know that content standards set by the states are rigorous, attainable, and important
- How to create assessments that are tied to these standards and that help schools teach all students
- How to create assessments that are innovative, cost-effective, and technically sound
In addition to these challenges, Mr. Roeber touched on others, including the difficulty of: (1) implementing multiple assessments; (2) defining adequate yearly progress; (3) reporting assessment results in ways that support the improvement of schools; (4) including LEP students in testing programs; and (5) encouraging states to move forward with challenging content standards in the wake of staff cutbacks and decreasing financial resources. To demonstrate states' current activities, Mr. Roeber distributed a chart summarizing the amount and types of assessments implemented by states during the 1994-95 school year. For example, according to the chart, 18 states conducted some kind of performance assessment in 1994-95 (down from 25 the previous year). For more extensive data, Mr. Roeber told the panel to look for a report to be released by CCSSO in April; in part, the report examines the status of state content and performance standards, paying particular attention to how states define these terms. Mr. Roeber also said that when state assessment directors were asked about Title I evaluation plans, the common reaction was a "blank response" or a referral to the Title I director (who often knew just as little). Thus, although many states have assessment efforts underway, these efforts are usually not attributable to Title I. In terms of state efforts to develop coordinated systems of assessment, Mr. Roeber believes that states currently working on assessments will experience less difficulty than those that developed assessments three or four years ago. To improve state responses to ESEA, Mr. Roeber said that CCSSO is developing networks of states (and school districts) to work collaboratively on the issues and challenges posed by the new legislation. For example, through its Title I Technical Assistance Project, CCSSO is working to assist all states in developing Title I programs that will implement the intent of ESEA and Goals 2000. And through the State Collaborative on Assessment and Student Standards (SCASS), CCSSO is operating several projects on behalf of states. These include assessment consortia, in which state curriculum and assessment specialists, content experts, and others develop state consensus frameworks and prototype exercises in various content areas. They also include development projects, in which states work together to develop needed assessment standards and measures. In closing, Mr. Roeber said that he sees opportunity for collaboration between the states and ED. States are interested in collaboration for several reasons: (1) they are concerned about the long-term viability of Title I; (2) they recognize the need for interim data to help make program decisions; and (3) they want a model of assessment/evaluation that can guide the development of their own assessment systems. He said the next step is for ED to clearly specify its data requirements and needs; this way, CCSSO and other groups can work with ED to identify states that have the capacity to fulfill those needs. In response to Mr.
Roeber's presentation, panel members raised a few questions and concerns, including the desire to know the number of states that:
- Are in a position to meet the requirements set out by ESEA. Mr. Roeber said he cannot provide a number, as states continue to struggle over terminology and the question of what constitutes an element of reform. Given the spirit of the law, however, there are probably no states that currently have all the elements of reform in place. At the same time, some states, such as Kentucky, are closer to meeting the spirit of the law than others.
- Consider their assessments to be aligned with standards, based on the CCSSO survey. In response to this question, Ramsay Selden said that roughly one-fourth of states think they have aligned assessments, whereas most of the rest say they are working on it.

Panel Member Reports on State Assessments

Joyce Benjamin said that Oregon has built a statewide assessment over the last six years. The system began by testing students in reading, writing, and math, but is now expanding to include history, civics, geography, and economics. In response to some points raised by Mr. Roeber and panel members earlier, Ms. Benjamin said that student assessments are based on content standards and that these content standards do indeed differ from the state's curriculum framework. Still, some of the challenges facing Oregon include identifying the most important things that students should know and be able to do and deciding on the appropriate amount of testing. She added that Oregon does report results by school and thus can distinguish between Title I and non-Title I schools. Jessie Montano said that Minnesota's standards-based reform efforts are proceeding. Minnesota has two sets of standards: (1) Safety Net standards, a set of minimum requirements that all eighth graders must meet to graduate, and (2) Profile of Learning standards, a set of 64 high standards in ten interdisciplinary areas. Minnesota is now in the process of developing assessments that align with these standards. Ms. Montano said that Minnesota has had some success using Goals 2000 and Title I as an incentive to develop assessments. Moreover, the Title I director has worked closely with state assessment staff to ensure that the test results of Title I students can be disaggregated. Ms. Montano concluded that despite some resistance from teachers at the local level, the drive for assessments continues to move forward as state politicians, the business community, and the public recognize the need for some system of accountability. Joe Johnson of Texas recounted the manner in which the state developed its assessments. Mr. Johnson said that in theory, a state should first develop content standards, from which performance standards and then assessments should logically evolve. Texas, however, started with Essential Elements (a much looser notion than content standards) and then moved into assessments, which were developed at the same time as performance standards. Now, the state is revisiting its Essential Elements to develop a set of knowledge and skills that more closely resembles the notion of content standards. Thus, the future question is the extent to which there will be alignment between content standards and assessments, and the extent to which schools will tolerate changes in the assessment system to enhance alignment. Mr.
Johnson further explained that although the system is supposed to provide assessments to all students, many children, such as LEP and disabled students, are being exempted from testing. On the other hand, some positive features of the system include the fact that all assessment results are included in a public information system; this gives the state tremendous capacity to disaggregate data in many different ways. In addition, the assessment system is part of the state accountability system, which judges schools not only on how well they do in general, but also on the performance of certain subgroups (e.g., Hispanic students, low-income students) within schools. The state uses the same accountability system for Title I and makes no attempt to differentiate. In response to these presentations, panel members raised several questions and concerns, including:
- The need for a model of alignment. Responding to the definitional ambiguity affecting states, one panel member said that policymakers must encourage states to think hard about what alignment really means. Specifically, the notion of alignment must directly influence what happens in the classroom and not just imply a superficial matching between standards and assessments. Another member echoed this concern: he said that in addition to alignment being a horizontal, state-level concern, the more important issue may be how standards, assessments, and curriculum frameworks connect with classroom teachers.
- The extent to which ED has developed guidance materials to help states work through issues and challenges. In response to this question, Mary Jean LeTendre of ED said that the Department is in the process of developing guidance on a range of topics, including examples of standards and assessments. Ms. LeTendre added that ED also hopes to receive input from education experts on many of these issues.

Study of Special Strategies for Educating Disadvantaged Children

Sam Stringfield of Johns Hopkins University presented an overview of the purpose, design, and findings of the completed Special Strategies for Educating Disadvantaged Children studies, prepared under contract for ED. Originally planned to accompany the Prospects study, Special Strategies was designed to: (1) describe promising approaches that would be supported by Chapter 1; (2) compare the characteristics of promising alternatives with those of more traditional approaches to Chapter 1; and (3) assess the replicability of programs that appear most successful. To tackle these questions, the study examined ten types of special strategies in urban and rural/suburban sites across the country: James Comer's School Development Program, Success for All, Paideia, Coalition of Essential Schools, Schoolwide Projects (SWPs), Extended Year SWPs, Reading Recovery, Computer Curriculum Corporation, Extended Time, and Metra/Peer Tutoring. The ten program types were categorized according to a two-by-two matrix, with the source of the program or strategy (external or local) on one axis and the scope of the program or strategy (targeted subpopulation or whole-school restructuring) on the other. The study collected data at various levels and from various sources, including detailed observational data on classroom processes.
The study drew eight major findings, including the following:
- American students placed at risk of academic failure are capable of achieving at levels that meet national averages.
- Schools used federal compensatory education funds to create or adopt, and then to sustain, new programs they could not have considered otherwise.
- Each of the programs studied in Special Strategies offered clear strengths; however, none, if adopted in name only, significantly improved students' academic performance.
- The Special Strategies schools making the greatest academic gains emphasized implementing and institutionalizing both initial and long-term reforms.
In response to this overview, panel members raised several questions and concerns, including:
- The connection of this study to changes in the law that support systemic reform. Many of the special strategies featured in this study take a bottom-up, school-by-school approach to reform; how are changes in federal policy to influence these programs? In subsequent discussions, panel members agreed that the federal shift to standards-based reform has changed the incentive structure, motivating districts and schools to search for and adopt innovative solutions to local problems. In line with this idea, one panel member suggested that there is a need to look not only at schools that have adopted innovative reforms, but also at schools that have been historically reluctant to engage in reform. This way, one can determine whether changes in federal policy are indeed spurring changes in the attitudes and behavior of schools.
- The need to identify the characteristics of successful schools. The findings of the Special Strategies study reinforce the idea that some schools are doing well while others continue to lag behind. In schools where students are achieving, what factors are influencing success? Joe Johnson, a panel member, said that his center at the University of Texas at Austin has been funded to conduct a study that will attempt to answer that question.
- The need to disseminate the findings of this study to private foundations, which often support innovative programs without evaluating how school context influences program effectiveness.

Panel Priorities Regarding the ESEA Mandate for a National Assessment of Elementary and Secondary Programs

Panel members briefly discussed their priorities as members of a panel charged with assessing the effectiveness and impact of federal elementary and secondary education programs in general, and not just Title I. A panel member noted that although the IRP cannot look at everything, it should at least examine the major programs, such as Head Start, Goals 2000, and School-to-Work. Another member added that panel members are uniquely positioned to step back, put programs in perspective, and criticize the system in ways that ED and other federal departments cannot. Given the magnitude of this challenge, panel members concluded that they needed more time to think about what programs and issues should take priority. They agreed to send their concerns in writing to Chris Cross by April 1, along with any information on research efforts that might address these concerns. Chairperson Cross would compile the comments, organize them, and share them with ED. At the next meeting, Cross would also reserve time on the agenda to discuss these issues in detail.
State and Federal ESEA and Goals 2000 Implementation Studies

Brenda Turnbull of PSA and Jane Hannaway of the Urban Institute briefed the panel on two studies regarding the implementation of federal education programs. Because the studies are closely related, PSA and the Urban Institute intend to coordinate research and data collection efforts. The PSA study (also being conducted in conjunction with CPRE) will focus primarily on the implementation of federal programs in the context of state reform efforts. The study will determine the extent to which, and the ways in which, state agencies are advancing their own educational improvement strategies through the programmatic mechanisms of consolidated planning, development of standards and aligned assessments, program monitoring, waiver applications, funds targeting, technical assistance, support for local professional development, and the collection and use of data on student performance. The analyses will draw on surveys of program managers and other state-level leaders in all 50 states, and on site visits to classrooms, schools, and district offices in 12 districts within four states. The Urban Institute study will focus on two questions: (1) What is the federal government doing to promote the implementation of federally supported reform efforts at the state, local, and school level? and (2) How are these efforts being received by education personnel in the field? For instance, the study will examine the extent to which districts understand the elements of reform embedded in Goals 2000 and Title I, and will determine where districts turn for information and guidance. The results of the study should help ED understand where confusion exists and how ED staff can improve communications with, and the targeting of technical assistance to, states, districts, and schools. In response to the Urban Institute presentation, panel members raised several questions and concerns, including:
- The extent to which there will be a state effect on the way federal information is filtered to districts. Panel members generally agreed that there will be a large state effect and that the study should account for this fact, whether states are selected randomly or purposefully.
- The potential discrepancy between what people say they know and what they actually know about federal reforms. In other words, people in the field might indicate satisfaction with the information they are receiving but in fact know very little about reform provisions. In response to this concern, Jane Hannaway suggested that the survey would avoid "feel good" questions; she also said that she would be wary of survey results indicating a complete understanding of federal efforts.
- The possibility of the study examining the way people interpret the information they receive. In other words, although people in the field might be able to "mouth" the right words, the study should try to capture how, and the extent to which, people understand and apply the messages they receive.
- The possibility of collaborating with other studies. One panel member said that the Urban Institute study ties in nicely with the proposed CPRE 12-state study and that the two studies should try to enhance each other wherever feasible. Another member added that, from a practical point of view, coordination is important so that districts are not bombarded by many different studies and surveys.
Study of Barriers to Parent Involvement

Barbara Coates and Oliver Moles briefed the panel on a study designed to identify common barriers to effective parent involvement in schools. The study, mandated by Congress, will also report on successful state and local policies and programs that have overcome these barriers and improved parent involvement and the performance of participating children. Information sources include a Fast Response Survey (FRS) of public schools, to be sent to principals; a CCSSO survey on policies and activities of states to encourage local family involvement efforts; and existing case study material, research findings and analyses, and program descriptions of both elementary and secondary schools. The report is due to Congress in December 1996. An Idea Book for educational practitioners and policymakers will result from the findings. In response to this presentation, panel members raised several questions and concerns, including:
- The need for a broader conceptualization of parent involvement. One panel member warned ED not to view parents coming to the school as the only means of building home/school relationships. Rather, ED should recognize that there are many different ways to foster these partnerships, such as by understanding home learning. Building on this comment, another panel member added that educators too often focus on how to change parents while failing to consider how schools can change to accommodate the strengths of parents. A third member summed up some of these concerns when she added that as a parent, "Sometimes I don't want to be fixed."
- Why the study is surveying only principals, and not parents. Many panel members felt that the study would lack credibility if it did not survey parents on the barriers they face when they become involved in schools. In response to this concern, Oliver Moles said ED does not have the time or money to mount a new survey; however, it does intend to use other data sources besides the Fast Response Survey to collect more in-depth information on these questions. These data sources might include interviews with parents and students.
- The possibility of tapping into the expectations parents have for their children. One panel member argued that children often fail in school because their parents do not expect them to succeed. Thus, looking at what has been done to raise parent expectations is essential. The same panel member recognized that he did not know how this question could be addressed in a two-page survey.
In response to the concerns raised by IRP members, the Study of Barriers to Parent Involvement has been revised to include: (1) parent representatives on the panel of consultants advising on study design, implementation, and reporting; and (2) focus groups of parents. As planned, the study will also include an analysis of 1996 National Household Education Survey data collected on parents and their involvement in schools.

Study of Title I Intra-District Targeting

Stephanie Stullich of PES outlined a study designed to examine the targeting of Title I funds at the school level. Some key issues to be addressed by the study include: (1) ways that poverty data are used to determine eligibility; (2) the level of funding for middle schools and high schools; (3) the level of funding in schoolwide programs compared with targeted assistance schools; (4) the extent to which waivers are used to provide Title I funds to schools that would not otherwise be eligible; and (5) the effects of any reductions in Title I funding.
The study has two main components: a national analysis and an in-depth analysis. For the national analysis, the study will use data from the Schools and Staffing Survey (SASS) for 1993-94 and from NAEP for 1995-96 to provide a national estimate of any changes in the number of schools served at different poverty and grade levels. For the in-depth analysis, the study will analyze existing local education agency (LEA) records of Title I school selection and allocations for 1994-95, 1995-96, and 1996-97 to examine changes in how districts allocate funds and the effects of individual targeting provisions. Originally, the study planned to examine LEA records for the 50 largest school districts in the country, but Ms. Stullich said PES is now reconsidering this idea, given the practical difficulty of manually entering data for about 15,000 schools. She said that the number of large districts included in the study will probably depend on the number of districts that can provide data on diskette. Because the sample will not be nationally representative, PES is also considering concentrating on states that can provide complete and accurate data.

In response to this presentation, panel members raised several questions and concerns, including whether:

- The study should examine the amount of funds retained at the district level. Ms. Stullich said that the study would try to obtain these data.
- The study should address the argument that as Title I schools adopt schoolwide programs, students most at risk of school failure will lose services. In response, Ms. Stullich explained that this study would not be able to address that issue, since it is not designed to look at resource allocations within schools.
- The study should continue beyond FY 1996-97. In response, Ms. Stullich said that ED plans to continue to examine these issues in subsequent years using NAEP and SASS data.
- The analysis of waivers should also look at waivers granted by ED-Flex states. Ms. Stullich said ED will consider whether and how this could be addressed through this particular study.
- Both small and large school districts should be sampled, because a comparison will probably reveal substantial differences in the way resources are allocated among districts of varying size. Ms. Stullich said that the study will collect data for districts of different sizes and poverty levels.
- The study should also look at the connection between the allocation and targeting of Title I and state compensatory education funds. Ms. Stullich said that the study will try to obtain data on state compensatory education funding in Title I and non-Title I schools.

Study of Title I Services to Private School Students

Robert Glenn of PES and Bruce Haslam of PSA briefed the panel on a study designed to examine the impact of changes in requirements for serving private-school students under IASA on the participation of private-school students in Title I. Dr. Haslam began the presentation by sharing some background information:

- Based on data from the 1990-91 school year, private school students represent about three percent of all children served by Title I, or approximately 175,000 students in more than 2,000 districts across the country. Almost all of these students attend religious schools, many of which are located in large school districts.
- According to the Supreme Court decision in the Felton case, private-school students eligible for Title I must receive services in "religiously neutral sites."
In practice, this has meant that private school students have received services in a variety of ways, including: (1) directly from Title I staff in portable classrooms or other "neutral sites"; (2) through computer-assisted instruction; and/or (3) through third-party contractors. Initial implementation of the new requirements also resulted in a sharp decline in participation by religious-school students.

According to Dr. Haslam, the passage of IASA has resulted in two substantial changes in the requirements for serving private school students. The first change relates to the provision that funds for services be allocated in proportion to the number of students from high-poverty families. This change has required many districts to find new ways of working with private schools to obtain poverty data. The second change is in consultation requirements; school districts are now required to consult with private school representatives on more issues and at several different points in planning and implementing services to private-school participants. The project will focus on the implementation of these new provisions in districts and on the effect these provisions have had on the participation of private-school students.

The project consists of two parts. The first part, an issue brief, will focus on such topics as: (1) national and district participation patterns and the impact of court decisions; (2) strategies and effectiveness of consultation; and (3) allocation of resources for services to private school students. In preparing this issue brief, PSA will rely on documents both from within ED (e.g., policy guidance materials) and from outside of ED (e.g., research reports, legal briefs, documents from private schools). PSA will also conduct interviews with ED staff and with individuals from the private school community. The second part is a survey of 12 to 15 items that will be administered to Title I directors in a nationally representative sample of districts. The survey might address such topics as: (1) changes in participation; (2) strategies and data used to allocate resources; (3) strategies and data used to identify eligible students; (4) the amount, timing, and content of consultation with private school representatives; and (5) the type(s) and amount of services provided.

In response to this presentation, panel members raised several questions and concerns, including:

- The importance of surveying private school officials, in addition to surveying district Title I directors. Several panel members felt the study would lack credibility if it relied exclusively on the responses of Title I directors and did not also attempt to gauge the responses of private school officials to changes in Title I legislation. In response to this concern, Val Plisko said that a lack of resources would make it difficult to develop a second survey. To solve this dilemma, a panel member suggested that the survey go through the district administrators to obtain responses from the private school representatives. Another member added that this process could be facilitated by convening a meeting with Catholic education groups to inform them of the study and its purpose.
- The possibility of this study addressing issues of student achievement. Dr. Haslam said that although districts will probably be unable to provide much information on student achievement, the study will investigate the measurement systems that are in place.
- The possibility of this study, or some other study, focusing on the enormous growth in the private contracting of Title I services. This option applies not just to private schools, but to all schools.

In response to the concerns raised by IRP members, ED is considering a survey of private school representatives to complement the survey of district Title I directors. The sample would include representatives of private school organizations in the same districts included in the survey of Title I directors. The number of districts would be reduced from 400 to 200 to contain costs. Meetings with private school groups are planned to inform them of the survey, to gauge their interest in its contents, and to gain their support in order to increase the survey response rate from the private school community.

Evaluation of the Title II Professional Development Program

Val Plisko of PES briefed the panel on a study designed to evaluate the contribution of the Eisenhower Professional Development Program (Title II) to comprehensive education reform. The evaluation will use ED's Eisenhower Professional Development Program performance indicators as a framework, addressing the following objectives: (1) quality of professional development; (2) targeting; (3) teacher learning outcomes; (4) classroom practice outcomes; (5) student interest and learning outcomes; (6) capacity of schools to sustain a community of learners; and (7) effectiveness of program management and coordination at the federal, state, and local levels. By using this framework, the study will most likely be able to link with other studies that use similar performance indicators. Ms. Plisko said that the general design has not yet been developed but that ED hopes to collaborate with past and current studies wherever possible. The evaluation contract will be awarded by the end of FY 1996 through open competition.

In response to this presentation, panel members raised several questions and concerns, including:

- The need to view professional development not as a separate program, but as an integrated part of the life of the school. Almost all panel members agreed that educators need to conceptualize professional development broadly to encompass informal as well as formal growth activities. Similarly, any study of professional development should reflect this new conceptualization, or else it runs the risk of perpetuating an outdated paradigm. One panel member said that an evaluation based on this new model of professional development could begin by looking at changes in teaching practice and then work backward to identify the programs and experiences that support those changes.
- The importance of focusing on the interplay of Title II with other federal programs, given that professional development should be viewed as an integrated part of school improvement. For example, many states use Goals 2000 money to support professional development; therefore, in what ways are Title II funds being used to support the creation of schoolwide programs under Title I?
- The need to track the use of Title II funding allocations, given that professional development funds are often consumed by district personnel at the expense of schools.
- The extent to which paraprofessionals have access to professional development opportunities, given that paraprofessionals are often responsible for the education of LEP students.

Other Agenda Items

The next meeting of the IRP will be held on August 8-9, 1996.
In the meantime, panel members should sign up to serve on subgroups that will meet in the interim.

Summary of the Title I/ESEA Independent Review Panel Meeting, September 14-15, 1995

Independent Review Panel (IRP) Members Present:
Marilyn Aklin, National Coalition of Chapter 1/Title I Parents
Anthony Bryk, University of Chicago
David Cohen, University of Michigan
George Corwell, New Jersey Catholic Conference
Christopher Cross, Council for Basic Education
Sharon Darling, National Center for Family Literacy
Joyce Epstein, Johns Hopkins University
Susan Fuhrman, University of Pennsylvania, Consortium for Policy Research in Education
Jack Jennings, Institute for Educational Leadership
Joseph Johnson, University of Texas at Austin
Phyllis McClure, Independent Consultant, Washington, DC
Jessie Montano, Minnesota Department of Education
Mary O'Dwyer, Oyler School, Cincinnati, OH
Andrew Porter, University of Wisconsin at Madison
Edward Reidy, Kentucky Department of Education
Linda Rodriguez, Rodney B. Cox School, Tampa, FL
Richard Ruiz, University of Arizona, College of Education
Ramsay Selden, Council of Chief State School Officers
Maris Vinovskis, University of Michigan

U.S. Department of Education Staff Present:
Joe Conaty, Office of Educational Research and Improvement (OERI)
Alan Ginsburg, Director, PES
Robert Glenn, PES
Nancy Loy, PES
Maggie McNeely, OERI
Valena Plisko, PES
Jeff Rodamar, PES
Sue Ross, PES
Elois Scott, PES
Charles Stalford, OERI
Stephanie Stullich, PES

Foundation and Other Organization Representatives Present:
Adrienne Bailey, Council of Great City Schools
Paul Barton, Educational Testing Service
Susan Gross, National Science Foundation
Fritz Mosher, Carnegie Corporation of New York
Gene Paslov, New Standards
Ed Pauly, Manpower Demonstration Research Corporation
Sylvia Robinson, Ewing Marion Kauffman Foundation
Richard Sawyer, American College Testing
Cheryl Tibbals, New Standards

Meeting Summary

The following is a summary of the discussions held at the third meeting of the Independent Review Panel (IRP) on September 14-15, 1995. The summary is organized around the five areas of activity on the meeting agenda: (1) the presentation of a preliminary outline for the U.S. Department of Education's (ED) Interim Report to Congress on the National Assessment of Title I; (2) a briefing on the IRP's interim statement, which is to accompany ED's Interim Report to Congress; (3) the presentations and panel discussions of six papers concerning key dimensions of the National Assessment; (4) a discussion of ED's current and planned studies concerning federal support for education reform; and (5) a discussion with representatives of foundations and nongovernmental organizations regarding collaborative opportunities to evaluate federal support for education reform.

ED's Interim Report to Congress on Title I

Val Plisko of ED presented to the panel an outline of ED's Interim Report to Congress on the National Assessment of Title I. The Interim Report, mandated by Congress, is due by January 1996. Ms. Plisko said the report will serve to: (1) provide a baseline from which to benchmark and chart the progress of Title I; (2) set an analytic agenda for subsequent assessments of Title I and other elementary/secondary programs; and (3) help foster collaboration across federal, state, and local levels of government as well as among government agencies, research institutions, and private foundations.
The panel provided feedback on the outline in the following areas:

Student Achievement

Panel members urged ED to make student achievement a greater focus of the report. As one member noted, the new Title I legislation signals a shift from the old categorical program model to a more systemic, collaborative program model and assumes that this shift will lead to improved student performance. Thus, the report needs to highlight this shift and show how the new legislation is better serving historically disadvantaged students. In addition, many panel members suggested that ED clarify the meaning of the phrase "Exchanging Greater Flexibility for Greater Accountability," to stress the fact that accountability refers to student performance, not simply compliance with federal rules and regulations.

Defining Reasonable Expectations

Given the importance of student achievement, several panel members suggested that ED set reasonable expectations for school and student improvement under the Title I program. One panel member suggested that ED look into how states are defining adequate yearly progress (AYP). To close the achievement gap between advantaged and disadvantaged students, he stressed that AYP must be defined rigorously, so that all students are challenged to meet the same high standards.

Explaining Results

Many panel members urged ED to provide explanatory rather than descriptive data concerning the implementation of the federal legislation. Ideally, this process would involve mapping the changes in the Title I legislation, the changes in the school improvement process, and the changes in student achievement. Such detailed information would provide low-achieving schools with valuable insights that they could use to help all students meet high standards.

Panel members also said the report should:

- Investigate the link between state and national standards, and the extent to which standards vary across states
- Address the connection between states and big-city school districts, and the way this relationship supports or thwarts systemic reform efforts
- Investigate the extent to which Title I has promoted the coordination of resources and services, and the extent to which this coordination is making a difference for students
- Provide a multidimensional profile of students served under Title I, including, for example, LEP students, Native Americans, and migrant children
- Include information on the educational attainment of parents, given that Title I now allows funding for adult learning
- Investigate not only the extent to which school support teams have taken form but also the degree to which they have increased the capacity of schools to achieve reform

At the end of the meeting, panel members agreed to review parts of the Interim Report.

The IRP's Independent Statement to Congress

Ed Reidy, chairperson of a subcommittee charged with writing the panel's independent statement to accompany ED's Interim Report to Congress, updated the panel on the subcommittee's progress.
Reidy said that the independent statement would highlight three main issues:

- The lack of coordination between the time needed for the full implementation of Title I and the time by which Congress expects a report on the effects of the legislation
- The need for resources to leverage existing studies, expand existing studies, and create new studies on the implementation and impact of Title I
- The panel's obligation to define reasonable expectations and to identify measures of progress along the path to success

In response to Reidy's briefing, some panel members recommended that the panel carefully frame its concerns about timing. That is, instead of just saying that the legislation will take more time, the panel should outline a path of school improvement and identify early indicators of success that could be used as benchmarks. Although the panel generally endorsed this approach, one panel member noted that the panel should be prepared to defend the relationship between the indicators of progress it chooses and student achievement. The IRP chairperson, Chris Cross, asked that a draft of the IRP's Independent Statement to Congress be disseminated to ED staff so that the IRP can obtain some feedback. He said he hopes to finalize the statement in time for the IRP's next meeting in November.

ED Presentation and Panel Discussion on Key Dimensions of the Assessment of Title I and Other Federal Programs

ED staff and Brenda Turnbull of Policy Studies Associates (PSA) presented background papers on six key research issues identified by the panel at its last meeting: high academic content standards for all children, assessment, professional development, flexibility and accountability, parent involvement, and targeting resources. For each of these topics, presenters identified: (1) the relevant provisions of the new federal legislation; (2) performance criteria and indicators that could be used to determine whether the legislation is taking hold at the federal, state, district, school, and classroom levels; and (3) potential data sources and data gaps. As the panel heard these presentations, a common refrain was echoed: it is impossible to view these topics in isolation from each other; rather, they must be viewed as an integrated whole. For organizational purposes, however, the panel chose to retain the topical distinctions. What follows are highlights from each of the six presentations and discussions.

Standards

Joe Conaty from OERI outlined the main legislative and research issues surrounding the area of high academic content standards for all children. He said there is a fair amount of data of mixed quality relating to the development of state standards. Preliminary findings show tremendous variability in the way states are defining standards--to the extent that some state standards will probably have very little impact on curriculum and instruction. Mr. Conaty also identified gaps in the research base, including the extent to which: (1) state standards are influencing local curriculum policy and classroom practice; (2) state standards apply to Title I students; and (3) Title I standards are aligned with curriculum and student assessment.

In response to this presentation, panel members:

- Noted several existing projects (e.g., New Standards and the Annenberg Challenge) that could provide data on the research gap concerning local curriculum policy and classroom practice
- Asked if anyone had a definition for the word "alignment." Although the term is frequently used in reform circles, alignment will not be realized until educators agree on what it means.
- Asked for evidence supporting the claim that there are different standards for different students. Some members feared that such talk could create a self-fulfilling prophecy.
- Questioned the extent to which ED is interested only in nationally representative studies. One panel member suggested that ED not restrict itself to nationally representative data.
- Questioned whether ED would address the resistance to standards-setting that has emerged within states and localities. In addition, panel members questioned the extent to which the background paper should address the quality of standards.
- Suggested that ED study how teachers perceive the intent of the new federal legislation. That is, do teachers really believe that all children can learn, and do teachers really hold all children to the same standards?
- Warned ED not to expect changes in classroom practice based on the existence of state standards. That is, ED would be wrong to assume that teachers are not teaching demanding material to all students because standards have not taken hold; similarly, ED would be wrong to assume that classroom change is taking place because high standards exist.

Assessment

Jeff Rodamar and Elois Scott of PES outlined the main research issues regarding assessment. They said that if standards are the guiding light of reform, then assessments are the enforcers. Specifically, the new Title I legislation requires or encourages states to identify what constitutes adequate yearly progress (AYP) and to use state student assessments as a primary measure of such progress. At a minimum, these assessments of AYP must: (1) be the same for all children; (2) be aligned with state content and performance standards; and (3) be valid and reliable for the purposes for which they are used. Thus, any evaluation of Title I would naturally want to look not only at the extent to which state and district assessments meet these criteria but also at the extent to which differences in student achievement diminish when all students have access to a high-quality curriculum aligned with assessments.

The concerns of panel members touched on:

- The extent to which the Title I legislation mandates the same assessments for all students. At least one panel member questioned the wisdom of this approach, given the profound variability that exists in student learning. Most panel members felt that it was appropriate to use the same assessments for all students and that ED's report should investigate whether or not this is occurring. One member noted that the Title I legislation for student assessment does accommodate students with special needs.
- The importance of placing Title I assessments in some kind of context. That is, recognizing how: (1) assessments for Title I students relate to state, district, and classroom assessments; (2) assessments of all kinds relate to standards, professional development, and other components of systemic reform; and (3) assessments of all kinds contribute to the school development process, which consists of improved pedagogy, curriculum changes, and student achievement.
- The extent to which assessments are statistically rigorous and methodologically valid
- The importance of identifying the multiple uses of assessments--such as accountability and improved instruction--and how these different uses sometimes conflict. Related to this is the question of who uses assessments and when.
- The fact that people will be looking at state assessment systems in transition over the next five years, and the need to identify and convey the lessons learned from this transition period
- The fact that assessment systems take more time to implement than other pieces of the systemic reform puzzle, and the difficulty of communicating this fact to the public

Professional Development

Brenda Turnbull of PSA outlined the main issues surrounding the area of professional development. She said that many organizations are conducting studies of professional development and that this research should yield a bounty of rich, descriptive data. In addition, ED plans to do a considerable amount of survey work at the state, school, and classroom levels. These surveys will largely describe professional development activities and tap perceptions of their effectiveness. Data gaps that remain include: the extent to which professional development reflects recent research on teaching and learning; the financing of professional development; the status of professional development networks; the link between professional development and changes in instruction; and the link between professional development and student achievement. Finally, Dr. Turnbull noted that when assessing professional development, it is important to distinguish between quantity and quality. That is, professional development in large doses does not necessarily ensure improved instruction; rather, research is beginning to reveal that certain types of professional development are more effective than others.

In response to this presentation, panel members raised a number of concerns, including:

- The need for more research on the link between teacher training and student achievement. For example, one member cited a study that, ironically, showed an inverse relationship between teacher training and student achievement.
- The need to realize that Title I is only a small piece of the professional development puzzle and that teachers receive professional development from a variety of sources, such as teacher networks
- The importance of framing professional development as a coherent schoolwide undertaking, not as an individual activity
- The need to look at how professional development helps certain groups of students (e.g., LEP students) learn certain kinds of skills (e.g., how to write)
- The flaws of teacher self-reports as mechanisms for gauging the effectiveness of professional development. As one member noted, the more knowledgeable teachers are about professional development, the more likely they are to misrepresent the role of professional development in their schools.
- The difficulty teachers have finding time for professional development, given that their primary responsibilities lie with students
- The need for longitudinal data on professional development. As one member noted, professional development often shows success in the short term, but then, over time, teaching practices revert to their baseline level.
- The fact that professional development involves not only teachers, but principals, administrators, aides, and parents

Flexibility and Accountability

Val Plisko and Sue Ross of ED outlined the major research and data collection issues surrounding the area of flexibility and accountability. They told the panel that the new federal legislation was designed to support decentralized decisionmaking and increased flexibility so that schools might craft whole-school approaches to reform.
Given these new goals and priorities, there is a need to look at how these changing relationships are playing out at the federal, state, district, school, and classroom levels. An area of special interest is the new waiver authority provided under ESEA and Goals 2000, along with the implications of these provisions across the various levels of school governance. As for data gaps, there is a need to clearly define terms such as "flexibility" and "comprehensive approaches" and to examine the changing relationships between Title I schools and federal-state-local governance systems.

In response to this presentation, panel members:

- Expressed concern that ED was focusing too narrowly on Title I and not clearly addressing the relationship between Title I and other federal programs
- Urged ED to answer the question "flexibility for what?" One panel member suggested that the point of giving states, districts, and schools increased flexibility was to ensure high standards across different settings.
- Noted that despite the existence of waiver authority, few states have taken advantage of it. On a related note, one member asked what was being done to make states, districts, and schools aware of their options under the new federal legislation.
- Noted that the barriers to flexibility are not only legal, but attitudinal. That is, even if policymakers were to remove all the legal barriers, opportunities for flexibility would still be bound by traditions and norms. To help break through this traditional mindset, one panel member suggested providing descriptions or vignettes of successful schools that operate under a flexible system.
- Expressed concern over the implications of Title I becoming two programs, schoolwide and targeted assistance. One panel member said that this division has caused great confusion at the state and local levels.
- Urged ED to pay more attention to certain legislative provisions and their effects. For example, Title XIV allows districts to consolidate administrative dollars, and its corollary at the state level allows states to do the same.

Family and Community Involvement

Alan Ginsburg of ED outlined the major research issues surrounding the area of family and community involvement. Family involvement is: (1) a component of most of ED's new legislative authority; (2) recognized as a national education goal by Congress; and (3) supported through the "Family Involvement Partnership for Learning." Initiatives to strengthen and improve family involvement seek to increase awareness of the importance of family involvement, increase commitment and action, and increase the capacity to provide effective family involvement. One large research gap is the need for control-group or intensive process-outcome studies of specific interventions, particularly in low-income communities. Also needed are surveys of business and community support for family involvement.

After listening to the presentation, panel members raised a number of questions and concerns, including:

- The question of whether this paper on family involvement could reference, as did the paper on professional development, Goal 8 of the National Education Goals. The paper could then use some of the indicators developed by the National Education Goals Panel.
- The matter of whether parent-school compacts are being misconstrued by communities. As one member lamented, although parent-school compacts were designed to lay the foundation for meaningful school-home partnerships, they are being interpreted as mechanical written contracts that hold little substantive value. On a related note, parent information and resource centers are also being interpreted in a very narrow sense. Policymakers need to clarify the original intent of these initiatives.
- The asymmetry of school-family relationships. As one member noted, most approaches to community involvement rely on parents changing in relation to schools; however, it is just as important for schools to change in relation to what they see happening in homes.
- The need to place greater emphasis on adult education and literacy. As one member noted, to become true learning places for families, schools must attend to the education and vocational needs of parents.
- The need to make "continuous improvement in the area of community involvement" an objective of the new initiatives. As one member noted, many components of the new legislation discuss involving parents in an ongoing, sustained fashion.

Targeting Resources

Stephanie Stullich of ED outlined the major issues concerning the targeting of resources. Ms. Stullich informed the panel that the allocation of Title I funds represented one of the more contentious issues in the new legislation. The major questions concerning allocation are: Who does the money go to? How much money does each level (state-district-school) receive? And how is the money used? Ms. Stullich believes very little data on targeting and resource allocation issues will be available aside from what ED collects through the National Assessment.

In response to the presentation, panel members stressed:

- The importance of seeing Title I outlays in the context of state and local outlays
- The need to explore the implications of new provisions in Title I legislation that encourage a greater allocation of funds to high schools and middle schools
- The need to address issues that are specific to targeted assistance programs

ED's Current and Planned Studies of Federal Education Programs

Panel members heard presentations concerning four studies of federal education programs: ED's planned longitudinal evaluation of school change and performance; OERI's fast response survey of teachers and schools; the PSA/CPRE state survey; and ED's proposed study of participation of private school students in Title I.

The Longitudinal Evaluation of School Change and Performance

Val Plisko of ED briefly outlined ED's planned longitudinal evaluation of school change and performance. The evaluation will look at Title I and other federal programs to determine the extent to which federal initiatives are helping or hindering school reform at the school and classroom level. Ms. Plisko asked the panel for advice on a number of research and design issues, including: (1) whether to make the study nationally representative; (2) the choice of school level; (3) the choice of grade level; and (4) how to assess student performance. On whether or not to make the study nationally representative, the panel generally agreed that it would be more beneficial for ED to design a study that focuses on particular policy concerns and to rely on other studies for national-level estimates. This way, ED could gather rich, in-depth information and construct specific stories of school change--stories that people will remember.
However, the panel postponed comments on the other issues and requested a short statement from ED on its research priorities. Panel members felt that once ED identified the major questions it wants to address, the panel could provide feedback on some of the study's more technical design and measurement issues.

Fast Response Survey

Charlie Stalford of OERI briefly updated the panel on ED's efforts to develop a fast response survey of schools and teachers, a collaborative project involving OERI, PES, and the National Center for Education Statistics (NCES). He told the panel that the revised surveys addressed many of the concerns raised by panel members during the last meeting of the IRP, such as the use of unfamiliar language and the inclusion of questions asking for broad opinions. He hoped the revised surveys would serve as valuable tools for assessing the progress of federal reforms at the school and classroom levels.

Despite the revisions made to the surveys, panel members raised a number of important concerns. For example, one member questioned the use of questions that attempt to gauge attitudes and perceptions. He urged survey designers to ask more behavior-oriented questions that tap into what teachers actually do. He said that questions on perceptions still serve a purpose but that they must be analyzed in conjunction with more behavioral responses. Panel members generally agreed with these comments. A second member said that a fast response survey should focus on schools, not teachers. She said that, given her experience with teacher surveys, it would be hard to obtain any kind of valid, reliable information with these kinds of questions. "Fast response and teacher survey is an oxymoron," she remarked.

Panel members raised other concerns and suggestions, namely that the survey:

- Should probably avoid the word "standards," as many teachers and administrators see standards-based reform as a top-down imposition
- Should include parents, not just teachers and administrators, to ensure consistency with the drive for community and family involvement
- Should link to the five or six research areas that ED deems to be its major areas of interest

PSA/CPRE State Survey

Brenda Turnbull of PSA briefly outlined a study PSA and CPRE are conducting for ED titled "Evaluating States' Planning and Implementation Efforts Regarding the Goals 2000: Educate America Act and the Elementary and Secondary Education Act." The evaluation will focus primarily on the implementation of federal programs in the context of state reform efforts. The analyses will draw on surveys of program managers and other state-level leaders in all 50 states and on site visits to classrooms, schools, and district offices in 12 districts within four states. Based on the advice of the IRP, these analyses will address the topics of standards, assessment, professional development, flexibility and accountability, and the targeting of resources.

In response to this briefing, several panel members questioned whether the study--when looking at standards and assessment--would make judgments on their quality. Dr. Turnbull explained that the study would describe state standards and assessments but that it would not make judgments on their adequacy. As discussion on this issue ensued, panel members generally agreed that it would not be appropriate for this study--which is being funded by ED--to take up the question of quality.
It was agreed that another research group--perhaps a private foundation--would be better equipped to deal with such a politically volatile issue. Another panel member questioned whether the study would conduct field research only for the 1995-96 school year. If this were the case, the member said, the report would not be able to demonstrate progress. Dr. Turnbull replied that the study is a baseline study and is not intended to stand alone. Val Plisko of ED added that ED plans to do another study in two years, once states have had more time to fully implement Title I and other federal legislation.

Proposed Study of Participation of Private School Students in Title I

Robert Glenn of ED outlined a proposed study of how public school districts provide compensatory education services to eligible students attending private schools. Mr. Glenn informed the panel that the study will be conducted in two phases. In the first phase, an issues paper will set forth the long-term analytic agenda on issues concerning the provision of Title I services to private school students. Also in this first phase, ED will develop a Quick Response Survey (10 to 12 questions) on how private school children are being counted for Title I services and on how districts are responding to changes in the law. In the second phase, the data from the issues paper and initial survey will be used to shape a stand-alone full report on the status of Title I service delivery to private school students. This report will be based on more detailed and extensive data collection from district and private school officials during the 1996-97 school year.

Unfamiliar with many of the issues surrounding the study, panel members generally refrained from offering advice and probed Mr. Glenn for more background information, including information on the cost of the study and on the number of private school students served by Title I. Mr. Glenn responded that he could not yet estimate the cost, as ED is still working out the details of the study. He did provide an estimate of private school students served under Title I: between 159,000 and 215,000 students. Panel members urged ED to clarify the objectives of this study and its importance relative to other studies. Members felt that once ED set these priorities, the panel would be better able to provide useful feedback on the study's mission and design.

Discussion with Foundation and Nonprofit Representatives

The panel invited representatives of nongovernmental organizations and private foundations to attend the meeting and to discuss their research efforts in the area of education reform. Representatives from the following organizations briefed panel members on their recent work and suggested opportunities for collaboration.

National Science Foundation (NSF)

Susan Gross of the National Science Foundation told the panel that NSF is doing some work in the area of assessment, a decision prompted by work done by PSA in support of the evaluation of the Statewide Systemic Initiatives (SSI). PSA's report, Assessment Programs in the Statewide Systemic Initiatives (SSI) States: Using Student Achievement to Evaluate the SSI, showed that the alignment of statewide assessment systems to systemic reform was not strong. In an effort to inspire closer alignment, NSF is hoping to provide the field with a prototype test or item bank drawn from NAEP (National Assessment of Educational Progress), TIMSS (Third International Mathematics and Science Study), and SASS (Schools and Staffing Survey).
NSF hopes to include language in the agreements with future SSI grantees that will require the grantees to use the item bank. This project, however, is still in its preliminary stages; NSF is holding a small meeting on October 6 to determine the extent to which NAEP and TIMSS items can be used. Dr. Gross pointed out that there are some good TIMSS items that have been developed but never used. Dr. Gross also explained that part of NSF's agenda is to support research and development in assessment, including, for example, looking at the psychometric properties of performance assessment. Moreover, Dr. Gross mentioned that NSF has been thinking about strategically upgrading the NAEP and TIMSS samples to allow for more in-depth comparisons in certain areas. She explained that presently it would be difficult to tease out SSI vs. non-SSI states, or school-level cuts (e.g., middle vs. high school), in either the NAEP or TIMSS data.

Finally, Dr. Gross updated the panel on the evaluation of the SSI. She said that three reports will be released in the near future addressing the prospects for scaling up, professional development, and student outcomes. Preliminary findings show that a growing consensus is emerging on what a coherent math and science education should look like. However, stumbling blocks remain, including aligning assessment with state reform and going to scale with reforms when resources are scarce and public support is limited. As NSF tackles these issues of sustainability in a fragile political environment, Dr. Gross stressed the importance of continuing a dialogue with others in the education research community.

After listening to this briefing, one panel member noted that NSF has not been the lightning rod for criticism that ED has been over the years. Thus, even as resistance to systemic reform and standards grows, NSF might still be in a better position than ED to address such politically sensitive issues as the quality of state standards. Although most panel members agreed with this perception, one member reminded the panel of an incident in which the education division of NSF took considerable heat for disseminating a controversial curriculum framework to schools.

Educational Testing Service (ETS)

ETS representative Paul Barton informed the panel that his organization has not been active in matters of elementary education; he said that he had been unaware of how broad the panel's charge was, and he promised to give the issue of identifying data sources more thought. Dr. Barton did suggest that ED look at the National Assessment of Educational Progress (NAEP) as a potential source of data on measures of student progress toward national standards. Dr. Barton referred the panel to the recent ETS report Reaching Standards: A Progress Report on Mathematics, which uses NAEP data to provide a progress report on moving student achievement toward the National Council of Teachers of Mathematics (NCTM) standards. In addition, Dr. Barton suggested that the NAEP school and principal questionnaires might provide useful data on what schools are doing differently as a result of state or local policy changes and on what those policy changes were. Although panel members did not come to a consensus on the practicality of using NAEP data to track student progress, they did agree that the interim report should speak to issues of student achievement.
However, one panel member said that for the time being, ED could avoid large-scale data issues and focus on specific cases of successful reform. Then, in later reports, it could address the generality of reforms nationwide. Panel members also asked whether NAEP had changed in recent years to include data on specific groups, such as LEP students and students with disabilities. Dr. Barton said that he did not know the answer to that question but that he would try to get that information for the group.

New Standards Project

New Standards representatives Gene Paslov and Cheryl Tibbals briefed the panel on the objectives and status of the New Standards project. They said that during the last year and a half, New Standards staff have worked closely with the National Council of Teachers of Mathematics (NCTM) and the National Council of Teachers of English (NCTE) to convert academic content standards into student performance standards and to link these standards to performance exams in English/language arts and math. Moreover, they are in the process of developing a protocol that would allow states involved in the New Standards project to link their own standards and assessments to those developed by the project. Paslov and Tibbals said the project will be ready to launch its work in the spring of 1996.

Responding to the presentation, panel members questioned the project's link to Title I and the prospect of collecting information on the number of Title I students in the project. Paslov said the project was capable of collecting that kind of information. Panel members also asked if New Standards looked at other frameworks--such as NAEP's--when developing its own standards. Paslov and Tibbals replied that they looked at all available documents on the subject but that they could not guarantee perfect alignment in all cases. Finally, panel members expressed some skepticism over the effort to link New Standards with state standards and assessments. They warned New Standards to proceed with caution because it is pioneering a territory filled with untested hypotheses. Paslov and Tibbals replied that they were realistic about the magnitude of the challenge they faced; they added that they were more secure about linking standards than about linking assessments.

American College Testing (ACT)

ACT representative Richard Sawyer told the panel that ACT offers many different tests of students' educational development--the common link being that all are achievement-oriented and curriculum-based. He said that although the organization's involvement in Title I has been intermittent and primarily at the request of clients, ACT can prepare reports on Title I students who take the pre-college ACT exam; however, the inferences that can be made from such reports are limited. Sawyer also described two research projects that he thought would be of interest to the panel. The first project seeks to identify the courses that influence student success as measured by test scores, while controlling for background characteristics. For example, students who take algebra and foreign languages have demonstrated higher test scores than students who do not. Although panel members found this information interesting, one member stressed that the reasons for these results must be explored before the findings are disseminated to the public. The second project seeks to match databases for different ACT exams in an effort to track the progress of students over time.
Sawyer said that if people were interested, ACT would have the capability to focus these studies on Title I students. Panel members expressed their willingness to learn more about the potential uses of this research.

Manpower Demonstration Research Corporation (MDRC)

MDRC representative Ed Pauly described a project MDRC is working on with other organizations that attempts to describe and identify the consequences of local reform initiatives. The project will pay particular attention to professional development and tie into Title I wherever possible. Pauly also described some work in the areas of welfare-to-work and school-to-work. Of particular interest to Dr. Ginsburg was a study looking at the educational development of children whose parents receive AFDC and at the potential link to issues of family literacy.

Carnegie Corporation and the Kauffman Foundation

Because of time constraints, Carnegie Corporation representative Fritz Mosher and Kauffman Foundation representative Sylvia Robinson were unable to provide an overview of the work of their respective organizations. However, both said they benefited from the discussion and expressed their willingness to contribute wherever possible.

Summary of the Title I/ESEA Independent Review Panel Meeting, November 30 - December 1, 1995

Independent Review Panel (IRP) Members Present:
Marilyn Aklin, National Coalition of Chapter 1/Title I Parents
Joyce Benjamin, Oregon Department of Education
Rolf Blank, Council of Chief State School Officers
Elizabeth Bright (for Linda Rodriguez), Pasco County School Board, Land O'Lakes, FL
George Corwell, New Jersey Catholic Conference
Christopher Cross, Council for Basic Education
Sharon Darling, National Center for Family Literacy
Bill Demmert, Western Washington University
Susan Fuhrman, University of Pennsylvania, Consortium for Policy Research in Education
Joseph Johnson, University of Texas at Austin
Phyllis McClure, Independent Consultant, Washington, DC
Jessie Montano, Minnesota Department of Education
Jennifer O'Day, Stanford University, School of Education
Edward Reidy, Kentucky Department of Education
Richard Ruiz, University of Arizona, College of Education
Ramsay Selden, Council of Chief State School Officers
Diane Stark (for Jack Jennings), Center on National Education Policy
Maris Vinovskis, University of Michigan

U.S. Department of Education Staff Present:
Joanne Bogart, Planning and Evaluation Service (PES)
Melissa Chabran, PES
Joe Conaty, Office of Educational Research and Improvement (OERI)
Alan Ginsburg, Director, PES
Robert Glenn, PES
Nancy Loy, PES
Maggie McNeely, OERI
Valena Plisko, PES
Jeff Rodamar, PES
Sue Ross, PES
Elois Scott, PES
Stephanie Stullich, PES

Congressional Staff Present:
Tony McCann, House Appropriations Committee
Denzel McGuire (for Sally Lovejoy), House Economic and Educational Opportunities Committee

Meeting Summary

The following is a summary of the discussions held at the fourth meeting of the Independent Review Panel (IRP) on November 30 - December 1, 1995. The summary is organized around the four areas of activity on the meeting agenda: (1) a discussion regarding the Draft Interim Report of the National Assessment of Title I; (2) a discussion regarding the IRP's statement to Congress; (3) a discussion regarding the U.S. Department of Education's (ED's) proposed longitudinal evaluation of school change and performance; and (4) a presentation by the Ethics Counsel on laws and regulations that apply to IRP members.
The Interim Report

IRP members spent almost a full day providing feedback on ED's Draft Interim Report of the National Assessment of Title I. The Interim Report, mandated by Congress, is due by January 1996. The following review, which highlights some of the main points made by IRP members, is composed of two main parts: the first part addresses points relevant to the Interim Report as a whole; the second addresses points relevant to specific sections of the report.

IRP Comments on the Report as a Whole

- State clearly what the Assessment can realistically accomplish, given resource and time constraints. For example, panel members generally agreed that the report would not be able to answer many student performance questions by 1998. The bullets at the end of each section should show when reports will be produced from each study. Others suggested that the Interim Report cite previous national assessments and explain what they were able to accomplish with available funding.
- Seek out additional data sources. Given the desire to answer student performance questions but the reality of funding and time constraints, panel members considered whether ED might "piggyback" onto other sources of data.
- Include the theme of parent and community involvement throughout the report.
- Pay more attention to language development as a basis for achieving high standards in all subject areas. That is, if schools want to develop high standards, they should focus on language development issues much the same way they focus on professional development and other elements of school success. Panel members thought that this notion of language development could be addressed in several sections, especially Section 2 (Challenging Standards) and Section 5 (Parent Involvement).
- Make clear who "we" is throughout the report. ED should specify that the expectations in the "What We Expect" sections of the report reflect benchmarks in ED's strategic plan. One member suggested dropping the "we" altogether, but others thought that the "we" could stay as long as ED explicitly states that the report represents the perspective of ED and not that of the IRP or some other group.
- Clarify the information sources used. Panel members felt that when stating facts, data, or research findings, ED should cite relevant sources and studies.
- Use more boxes, sidebars, charts, and real-world examples throughout the report to illustrate points made in the text. When providing examples, make it clear that there is no one right model, but that there are many different ways to approach issues of school improvement.
- Effectively incorporate various parts of the legislation throughout the report. For example, the report should address how dropping restrictions on LEP students will affect data reports on Title I. Also, ED should do a better job of highlighting the new model of parental involvement promoted in the legislation.
- Effectively incorporate the key issues identified by the IRP. Although IRP members unanimously praised the box on page 7, they felt that subsequent sections of the report needed to consistently refer back to these issues.

IRP Comments on Individual Sections

Foreword

- Make the foreword crisper and more succinct. Many members said the foreword needed to provide a clearer frame for the whole report, highlighting what would be covered throughout. Other members suggested creating more symmetry between the foreword and the final section (Section 7) of the report.

Introduction
- Specify what is different about the new Title I legislation. For example, show how the new Title I legislation views education reform primarily as a state and local effort, with the federal government playing more of a support role. One member suggested that these ideas could be addressed under the heading "The Significance of Title I" on page 5.
- State earlier in the introduction that Title I deals primarily with low-income students.
- Provide some background information on the IRP and its functions before presenting the box on page 7.
- Substantially change the diagram on page 9. Almost all panel members felt the diagram was inaccurate and politically problematic. Most agreed that the diagram should be redrawn to show that challenging standards come not from the federal government but from the states, with input from communities, schools, and parents.

Section 1: Baseline Information on Student Performance and Title I Participants

- Discuss what can be learned about student performance by 1998. Some members suggested that ED pull data on student performance from states that already have challenging standards, aligned assessments, and information systems in place. However, others argued that because of the great variation among state standards and assessments, one would have a hard time drawing national conclusions from such data; thus, data from nationally normed assessments might be more appropriate. Still, supporters of the state assessment approach argued that using nationally normed assessments contradicts the spirit of the legislation, which asks students to learn more through state-defined content and performance standards. Although the panel reached no consensus on which approach would be better, they agreed to have the Data Subcommittee explore the issue further.
- Recognize that an increase in the number of students served by Title I does not necessarily imply that student needs are being met, especially the needs of historically neglected students.
- Provide data on student participation in Title I broken down by race, income, and gender. However, some members said ED should stop to consider the rationale for comparing test scores by race.
- For services provided to nonpublic school students, use the word "equitable," not "equivalent."
- Recognize the gap between those students eligible for Title I and those actually participating, and address reasons for this gap.
- Explain that parents are now part of the Title I target population, and address the need to develop mechanisms for measuring the extent to which parents are included in the program.

Section 2: Reform through Linking Title I to Challenging Academic Standards

- Use and repeat the phrase "state-adopted standards."
- Change the phrase "national standards" (last paragraph, page 18) to "voluntary discipline standards." In the same sentence, one member suggested removing any mention of history standards.
- Recognize that states should eventually set content standards in subject areas beyond reading and mathematics.
- Remove the bracketed bullet at the bottom of page 19 describing the number of states adopting assessments aligned with state standards. A member who collects data on this subject said it is unrealistic to expect to have this information any time soon.
- Provide some information on the cost of developing state assessment systems. The panel agreed that ED should profile the assessment systems of two or three states; at least one state should be small--maybe Delaware--and at least one should be large--maybe California.
Move the first two paragraphs of page 20, which basically constitute a rationale for standards-based reform, to an earlier part of the section.

Section 3: Focus on Teaching and Learning

Add a sentence in the first full paragraph on page 5 describing how attention to higher-order thinking skills will facilitate student learning of both basic and advanced skills.

Provide some concrete, real-world examples of the kinds of professional development that will make a difference in classroom instruction and student learning.

Reconceptualize professional development in broader terms. For example, one member said that for teachers, becoming knowledgeable about the communities they serve should be an element of their professional development; the program called Funds of Knowledge for Teaching could be the subject of a boxed example.

Enlarge the scope of planned studies. One member said that because of the questionable use of the Fast Response Survey and the narrow focus of the study of migrant children, it seemed as if ED was putting "too many eggs in the longitudinal basket." Agreeing with this observation, other members suggested that ED use other data sources, such as state assessments, NAEP, and upcoming studies by CPRE, the Pew Charitable Trusts, and the Rockefeller Foundation.

Section 4: Flexibility Coupled with Increased Responsibility for Student Performance

Note that there are flexibility provisions in other federal education programs, including Goals 2000, other titles of ESEA, and the reauthorized Perkins Act. ED may want to highlight these provisions in a box or table.

Recognize that the new legislation not only provides for increased flexibility but fundamentally reconceptualizes the relationship between the federal government and the states.

Change the phrase "...districts are required to take corrective actions against a school..." at the top of page 32. As written, the sentence sounds too harsh and draconian.

Expand the example on page 33 to define what effective professional development means.

Recognize that increasing the number of schoolwide programs does not necessarily imply that schools are using federal dollars to implement comprehensive reforms as a way to meet the needs of all students in high-poverty schools. Accordingly, in the first bullet on page 35, change "would indicate" to "might indicate."

Emphasize that the reason for providing flexibility is to improve student learning.

Consider conducting a study on the way states are approaching the notion of adequate yearly progress. As one panel member noted, if adequate yearly progress is not defined rigorously, then the accountability side of the equation loses all value.

Section 5: Partnerships with Families and Communities to Support Learning

Establish a more symmetrical relationship between schools and families. Panel members criticized this section for creating the false impression that parents are instruments for "fixing" children and that they bear most of the responsibility for getting involved in schools when, in fact, the new legislation recognizes that schools are equally obliged to reach out to families and communities. Members agreed that the report must do a better job of highlighting what schools can do to influence the extent to which parents take part in their children's learning.

Identify the opportunities schools have to collaborate with other social service agencies that support children and families, including the availability of "five percent glue money."
Recognize that Title I compacts can include students, not just parents and schools.

Explore the links among parental involvement, language development, and student achievement. As one panel member said, "Parent involvement is the language development of kids."

Consider eliminating the charts on page 41. Some members questioned whether these charts are relevant and why no similar graphics were provided elsewhere in the report. One member said that a descriptive profile of a model school would be more appropriate.

Section 6: Effective Targeting of Title I Resources

Address the mismatch between Title I resources and programmatic expectations in high-poverty schools with schoolwide programs. Gather data on per-pupil expenditures in schoolwide programs. Contrary to popular belief, Title I money has not necessarily been targeted to high-poverty schools, making it difficult for these schools to meet the high expectations set by Congress. Although some panel members thought it might be more appropriate to address this concern in the IRP's independent statement to Congress, most felt that ED's report should mention it as well.

Consider collecting detailed, school-by-school data on poverty and Title I allocations from the 15 largest cities; a panel member contended that such a study would be more informative than a nationally representative survey.

Be careful about raising the issue of shifts in poverty among states; this topic can be a hornets' nest, and there is sampling error in the data.

Section 7: Plans for Evaluating Title I

Make the section stronger and tighter. If possible, link the organization of Section 7 to the organization of the foreword.

Use the phrase "school improvement" rather than "school reform" to ensure consistency throughout the report.

Address the effects of the new Title I provisions on middle and high schools. For example, how many middle and high schools now receive Title I money? What are they doing with the money?

When summarizing main points, refer the reader back to earlier sections of the report. This way, the reader will know where to turn for more information on a particular topic.

IRP's Independent Statement to Congress

Ed Reidy, chairperson of the subcommittee charged with writing the IRP's Independent Statement to Congress, presented a draft statement to panel members. Upon reviewing the document, panel members addressed a range of issues, including:

The need to state assertively the timing and funding constraints facing the Assessment. Panel members generally agreed that the draft statement did not sufficiently highlight the group's main concerns. These concerns include: (1) unreasonable reporting timelines--many important provisions of the legislation do not go into effect until after 1998, the year the final report is due to Congress; and (2) resource constraints--critical data gaps will be filled only to the extent that funding is available. Members agreed to reorganize the statement to state, first, the mission and charge of the panel and, second, the limitations of the 1998 report, given the realities described above. Members also agreed that the letter should recommend an additional report in December 1999, when more could be said about the effects of Title I. Until that time, members suggested that ED provide Congress with periodic reports on the implementation of Title I.

The means to present their statement to Congress effectively.
The panel agreed to present the report in letter form, with Chris Cross signing his name on behalf of the other panel members. The letter would accompany ED's report to Congress, although panel members did not decide on the best way of attaching it. Panel members stressed that the letter should convey the independence of the IRP throughout its affiliation with ED. One panel member suggested that a member of the panel, perhaps Chris Cross, brief some Appropriations Committee members concerning the letter and its purpose.

The central questions to be addressed by the Assessment. After expressing some concern over the two central questions featured in the draft independent statement, panel members felt that it made the most sense to use the key issues identified by the IRP as the organizational framework for the statement (see box on page 7 of ED's draft interim report). Panel members felt that these questions not only succinctly capture the concerns of the IRP but would also ensure consistency between ED's report and the independent statement. In addition to these key issues, panel members considered whether the Assessment should address the following question: Is the achievement of at-risk students improving relative to that of students who are not at risk? Although most believed this to be a central question, many questioned the likelihood of a reliable answer by 1998.

The tone of the independent statement. In contrast to ED's formal report, some members suggested that the letter establish a more casual--if not conversational--tone, using the first person and punchy statements wherever possible.

The distinction between the IRP's responsibilities and ED's responsibilities. Some panel members noted that the draft independent statement created the impression that the IRP was expected to do things above and beyond its mission. To avoid any misunderstanding, Dr. Ginsburg suggested that the independent statement use the phrase "The Assessment will..." when referring to the goals and activities of ED and the IRP.

After reviewing the draft, the panel agreed to have Brenda Turnbull of Policy Studies Associates (PSA) write the next draft of the independent statement.

The Longitudinal Evaluation

Joyce Frechtling of Westat offered some initial thoughts on the design of ED's proposed longitudinal evaluation of school change and performance. The purpose of the evaluation is to track changes in the reading and mathematics achievement of elementary-level Title I students and to determine the extent to which Title I and other federal programs support curricular and instructional practices that enable students to meet challenging standards. Generally speaking, the study will focus on what happens in the neediest schools in the most "pro-reform" states. However, within these broad parameters, Ms. Frechtling recognized that the study faces many unanswered questions. Because the study faces significant cost constraints, she said ED and its research partners need to establish the priorities of the study and, given those priorities, the most appropriate data collection methods. As panel members reviewed the draft design prepared by Ms. Frechtling, they grappled with many issues, including:

The rationale for focusing on "pro-reform" states. Ms. Frechtling said she would like to focus on states that already have the main elements of systemic reform in place, such as challenging standards, curriculum frameworks, and aligned assessments.
With this focus, the driving question of the study becomes: In states where reform is occurring as envisioned, what is happening to those students who have been the hardest to reach? Although panel members generally agreed with this "pro-reform" approach, some raised concerns. For example, one panel member suggested that systemic reform could become discredited over the next decade; if this were to happen, what would be the value of a study focusing primarily on standards-based reform? A second member cautioned that a focus on pro-reform states might disregard important developments in states that have traditionally been reluctant to engage in reform. He also felt that there would be much to gain by comparing "pro-reform" states with states that have not embraced the systemic model as wholeheartedly. Other panel members contended that even with a focus on pro-reform states, one would still see tremendous variation among the states and schools ultimately selected for study.

The number of states to be included in the study. Ms. Frechtling said that this number had not yet been determined and that it would depend on the scope and depth of data collection.

The extent to which the study will focus on Title I. As Ms. Frechtling noted, because the study is looking at changes in instructional practices and student learning, it will be impossible to attribute outcomes exclusively to Title I. At the same time, the study still expects to look at elements of reform specific to the new Title I legislation. Thus, the study will need to strike a reasonable balance on this issue.

The prospect of linking the study to other research efforts. Ms. Frechtling noted that she does not see the longitudinal study as a stand-alone effort but as a piece of the evaluation puzzle that will contribute significantly to the Assessment when viewed in conjunction with other research. Panel members echoed this sentiment and suggested that Ms. Frechtling and others consider selecting states where extensive data have already been collected.

The need for this study to capture the dynamics of the change process. One panel member questioned what distinguished this longitudinal evaluation from past studies. Given the complexity of systemic reform and the imprecise parameters of the independent variable, he said that this study should pay greater attention to understanding the change process. He suggested designing interviews or surveys that generate data comparable with that yielded by case studies. Ms. Frechtling agreed with these observations, saying that she might be willing to sacrifice some "statistical soundness" to get a more in-depth picture of how change takes place.

The appropriate data collection instruments. Panel members once again debated which data collection tools would be more appropriate for measuring student achievement: standardized, national-level assessments or state-level assessment instruments. One member argued that national assessments are valuable because they provide a common metric for comparing student progress across states and schools. Recognizing the problem of alignment, he suggested revising the national assessment instruments to better reflect state standards. However, others contended that even if national assessments were aligned perfectly with state standards, they would still have problems. For example, one member noted that although state assessments often form the basis of a high-stakes accountability system, national exams provide no incentive for teachers and students to perform well.
On a more philosophical level, another member noted that the study has no choice but to focus on state assessments because state-level reform is the focus of the new Title I legislation. In the end, most panel members agreed that some mix of national and state-level student assessment instruments would be necessary, although the relative weight of each approach was not determined.

The feasibility of using NAEP data. Although Ms. Frechtling said that she is considering using NAEP for mathematics achievement data, she questioned whether this approach would work if the data did not (1) provide school-level information and (2) prove statistically valid. Some panel members said that both conditions should be met before NAEP data are used.

The notion of "value added." Dr. Ginsburg questioned whether the evaluation should address the notion of value added--that is, recognize that students start out at different points in their development and somehow account for these differences in measuring the effect of school reform on student performance. One panel member said that although this notion of value added makes perfect sense in theory, it should be measured only if it can be done precisely and in a way that statisticians and the general public can understand.

Ethics Committee Presentation

Susan Erdeky and Jeff Morhardt from the Office of the General Counsel informed panel members of their legal responsibilities as consultants to the federal government. Generally speaking, Erdeky and Morhardt said that federal law prohibits panel members from participating in matters in which they have a direct personal or organizational financial interest. Such matters might include the drafting or reviewing of an RFP or the design of a specific study. These rules protect the competitive process so that no individual or organization receives an unfair competitive advantage. After this briefing, many panel members expressed concern that the line between fair and unfair play was vague. For example, one panel member wondered whether his participation in the IRP would preclude his university from applying for a grant or contract in the future. Another panel member asked whether his participation in the review of ED's longitudinal evaluation would bar him from contributing to the study at a later time. In response to these concerns, Erdeky and Morhardt replied that the rules are not always hard and fast but that, at this early stage, no member seemed to be in danger of committing an infraction. They said they simply wanted to present these issues to the panel so that members would have knowledge of the law and of their responsibilities under it. They also suggested two ways of mitigating allegations of unfair play: (1) publicizing information covered during the IRP meetings (e.g., in a technical library) so that all organizations share the same competitive advantage; and (2) including people of many viewpoints in the review and discussion of a particular topic so that the biases of various individuals and organizations cancel each other out. However, they cautioned that meeting these conditions does not definitively protect against a bid protest from a competing organization alleging a conflict of interest. Panel members asked for the telephone numbers of the presenters so that they could seek legal counsel as needed.

Next Steps

The panel agreed that the Data Subcommittee would meet in the near future to discuss design and cost issues for the longitudinal evaluation and other studies.
The panel also considered the possibility of organizing additional subcommittees to discuss other key topics, such as family involvement. The panel tentatively scheduled three agenda topics for the next IRP meeting, which will be held on March 7-8, 1996: (1) a briefing on the work of the Data Subcommittee; (2) a briefing on NAEP's capabilities and research efforts; and (3) a discussion of the panel's second mandate, to evaluate the effect of federal programs on state and local elementary and secondary education reform efforts. The panel tentatively scheduled a subsequent meeting for August 8-9, 1996.

Summary of the Title I/ESEA Independent Review Panel Meeting
May 18-19, 1995

Independent Review Panel (IRP) Members Present:
Marilyn Aklin, National Coalition of Chapter 1/Title I Parents
Joyce Benjamin, Oregon Department of Education
David Cohen, University of Michigan
George Corwell, New Jersey Catholic Conference
Christopher Cross, Council for Basic Education
Susan Fuhrman, Rutgers University, Consortium for Policy Research in Education
Janis Gabay, Junipero Serra High School, San Diego, CA
Joseph Johnson, University of Texas at Austin, Dana Center
Phyllis McClure, Independent consultant, Washington, D.C.
Jennifer O'Day, Stanford University, School of Education
Mary O'Dwyer, Oyler School, Cincinnati, OH
Edward Reidy, Kentucky Department of Education
Linda Rodriguez, Rodney B. Cox School, Tampa, FL
Richard Ruiz, University of Arizona, College of Education
Maris Vinovskis, University of Michigan

Individuals Attending on Behalf of IRP Members:
Joan Herman, University of California, Center for Research on Evaluation, Standards, and Student Testing (Eva Baker)
Bev Bing, National Center for Family Literacy (Sharon Darling)
Diane Starke, Center on National Education Policy (Jack Jennings)
Mavis Sanders, Johns Hopkins University, Center for the Social Organization of Schools (Joyce Epstein)
Todd Wagner, Minnesota Department of Education (Jessie Montano)

U.S. Department of Education Staff Present:
Joanne Bogart, Planning and Evaluation Service (PES)
Mike Cohen, Special Assistant to the Secretary
Joe Conaty, Office of Educational Research and Improvement (OERI)
Thomas Fagan, Director, Goals 2000, Office of Elementary and Secondary Education (OESE)
Alan Ginsburg, Director, PES
Mary Jean LeTendre, Director, Compensatory Education Programs, OESE
David Mack, OERI
Tom Payzant, Assistant Secretary, OESE
Valena Plisko, PES
Mike Smith, Under Secretary
Bayla White, Director, Office of Migrant Education, OESE
Susan Wilhelm, Special Assistant, Compensatory Education Programs, OESE

Meeting Overview

The first meeting of the Independent Review Panel, convened to address the mandated National Assessment of Title I and the comprehensive evaluation of the impact of federal programs on education reform efforts, was held at the U.S. Department of Education on May 18-19. The meeting was designed to provide panel members with (1) an overview of the principles and key provisions of Title I and other federal programs, (2) an opportunity to discuss the panel's role and substantive agenda, and (3) an overview of relevant current and proposed studies and evaluations conducted through the U.S. Department of Education and other organizations.
Department of Education staff from the Office of Elementary and Secondary Education and the offices of the Secretary and Under Secretary provided a broad overview of priorities established under recent legislation, including the reauthorized Elementary and Secondary Education Act (ESEA) and the Goals 2000: Educate America Act. Their remarks also addressed the role of the Department in promoting improved implementation of the newly enacted laws--notably, increased efforts to assist the field through guidance, performance monitoring teams, comprehensive technical assistance centers, technology, and other means. Department staff also highlighted the agency's strategic plan and intended efforts to link program evaluations to performance indicators.

A large portion of the meeting was devoted to discussing the panel's role and the substantive areas it would address. Given the scope of its mandate, the panel chose to begin by identifying key questions/issues to consider, focusing on the priorities identified through the National Assessment of Title I (Sec. 1501), which also apply--for the most part--to examinations of other ESEA programs and Goals 2000. Discussions of the substantive topics led to comments, questions, concerns, and shared ideas regarding the evaluation of the key issues. Critical concerns include:

The timing of the assessments and the difficulty of showing long-term outcomes in the short term. To what extent should the evaluation questions be framed around whether they can be answered by 1998, when the assessment reports are due?

The role of evaluations in providing feedback on the extent to which the new federal legislation is being appropriately implemented at the state and local levels. How will information be made available for mid-course corrections?

The reliance on state reports of performance versus national evaluations. How valid are the benchmarks states are setting for determining progress?

Panel members suggested that a second meeting be held in July to discuss strategies for narrowing the key evaluation questions that were addressed, as well as the current and proposed evaluations and other data collections needed to gather relevant information. Department staff will send the panel, in advance, a draft summary of issues and information sources to inform the second meeting. A summary of the discussion is attached.

Authority for and Purposes of the IRP

The Elementary and Secondary Education Act (ESEA), as amended by the Improving America's Schools Act of 1994 [IASA] (P.L. 103-382), mandates that the U.S. Department of Education (ED) conduct a National Assessment of Title I (Sec. 1501) and a comprehensive evaluation of the impact of federal programs on state and local elementary and secondary education reform efforts (Sec. 14701). Both mandated activities require that ED convene an independent panel of researchers, state and local practitioners, and other appropriate individuals to inform ongoing planning, analyses, and review of the evaluation activities and data collections that address the mandates, as well as a synthesis of findings. Given the common issues across both assessments and the need to ensure that appropriate and consistent linkages are made during the examination of federal programs, ED established one panel to address both mandates.
The purpose of the panel is to advise ED on the structure of evaluation activities, the coordination of data collections across multiple studies, and the synthesis of findings from the various component studies developed to inform both assessment activities. The panel will also help interpret evaluation findings, policy implications, and recommendations for future reauthorizations. The panel's purview includes programs authorized under the ESEA, as amended by IASA, the Goals 2000: Educate America Act, and the School-to-Work Opportunities Act of 1994. The following is a summary of the discussions held during the IRP's first meeting on May 18-19, 1995.

Summary of Discussion

ED Staff Remarks

In their remarks, Ms. LeTendre, Mr. Fagan, Assistant Secretary Payzant, and Mr. Cohen all emphasized that the U.S. Department of Education (ED) will be assessing the new federal legislation's implementation at the state and local levels and the extent to which it supports education reform efforts. Regarding the national assessment, ED wants the evaluation of Title I and other federally funded elementary and secondary education programs to convey information that is "meaningful, good, and understandable to many people" in order to improve programs in an ongoing way. Overall, ED staff urged the IRP to recognize that its mission, compared with that of its predecessors, is more complex because the panel must assess how an array of federal programs funded under the ESEA, Goals 2000, and School-to-Work legislation is helping to support and promote education reform in this country.

To provide the IRP with some context within which to begin thinking about the design and scope of the national assessment, ED staff described some state and local concerns about the new federal legislation as well as some changes to Department policies and procedures to help support the implementation of the new federal laws:

States are finding that one of their most difficult tasks, particularly for those that do not yet have standards or assessments in place, is setting benchmarks for judging whether their reform efforts are moving on track. In the short run, there will likely be no benchmarks for measuring education reform progress among the states.

ED has revised many of its policies and procedures in a concerted effort to support comprehensive education reform at the state and local levels. Specifically, ED has:

- Shifted the focus of its program review process away from strict compliance monitoring and toward a helpful critique of whether and how states are using federal dollars to support education reform. ED has invited program monitors from Title I, School Improvement Programs, Eisenhower Math and Science, Safe and Drug-Free Schools, Migrant Education, Indian Education, Goals 2000, and Impact Aid programs to participate on integrated review teams. The teams will approach their review of federally funded programs in a way that assists states and local education agencies (LEAs) in using their federal dollars to better support education reform.

- Begun, to the extent it can under statute, to replace regulations with practical guidance that promotes effective and innovative practice and policy.

- Planned to use evaluation data not just for accountability purposes but also to help states and local school districts improve upon their reform efforts. For example, evaluation data will be used to help educators identify effective policies and practices and correct problems of implementation as they arise.
Finally, ESEA, as amended by IASA, reconceptualizes the provision of technical assistance to states and local school districts. The law authorizes the establishment of 15 comprehensive regional assistance centers, considered an "essential ingredient" of the IASA's overall strategy to improve programs and provide children opportunities to meet challenging standards. The centers are intended to move away from offering program-specific assistance and toward assistance that "helps integrate into a coherent strategy for improving teaching and learning the various programs under this Act with state and local programs and other education reform efforts" [Sec. 13001(4)]. In addition, technology will be used to coordinate and provide information and assistance in a cost-effective way. Each of the 15 centers is required to give priority to serving schoolwide programs and the LEAs and Bureau-funded schools with the highest percentages or numbers of children in poverty.

In his opening remarks to the IRP, Dr. Alan Ginsburg described the technical and practical issues that the panel must consider when weighing its design options for evaluating the effects of Title I and other federal elementary and secondary education programs on state and local education reform. In particular, he urged the panel members to consider the following in setting priorities for their work:

In the event that the longitudinal study of Title I uses independently administered assessments for Title I students (rather than relying on test scores from the states and local school districts), the panel will need to find a mutually agreeable instrument, such as the National Assessment of Educational Progress (NAEP).

To the extent it can, ED wants the national assessment to focus on low-income schools.

ED wants to coordinate its evaluation activities with those of other agencies and organizations studying education reform.

ED is obligated to respond to the Government Performance and Results Act, which requires every federal department and agency to develop a strategic plan and performance indicators by 1999.

IRP Members' Comments, Questions, and Suggestions

Many IRP members were interested in understanding how reform ultimately affects the achievement of students. Many said, for example, that to identify how the new federal legislation supports or impedes education reform, they must understand how it works at the classroom level. Nearly all of the panel members are interested in issues regarding professional development and teacher instruction; specifically, many want to know how, if at all, the new legislation affects classroom practice. Others are interested in equity issues and how standards-based reform fosters equal educational opportunities for all students. Through the course of the meeting, IRP members expressed the following general concerns about evaluating the effects of Title I and other federal elementary and secondary education programs on education reform. In addition, the panel discussed some of the broad-based research questions to which it was interested in finding answers.

Questions pertaining to the role of the IRP and its evaluation agenda: What are the purposes of this panel? What are the overall purposes of the assessment about which the panel is giving advice? What studies have been proposed thus far? Given the time and funding constraints, should the panel just do case studies of reform in a few states and districts?
Who is the audience for the assessment? Congress? State and local program administrators? Classroom practitioners? What information is more salient for some audiences than for others? In the absence of definitive outcome measures, what evidence does Congress need to stay the course? To what extent should the research questions be framed around whether they can be answered by 1998?

Cross-cutting evaluation questions and issues: What about the new legislation has worked? What has not worked? Why? Did states and localities do what the legislation intended? Is the new federal legislation being appropriately implemented at the state and local levels? How important is it to attribute outcomes to funding sources? For example, if Title I dollars disappeared, would it make a difference? Are schools in high-poverty areas getting greater access to resources, as ESEA promotes? Are schools taking advantage of the waiver opportunities offered in the law? If so, are those waivers pursued for purposes of convenience or to help improve student achievement? Regarding content and student performance standards, how valid are the benchmarks states are setting to determine whether things are moving on track? Has enough time passed for people to know what good or valid benchmarks are? How is technology being used in the classroom? Is technical assistance provided across all programs? To what extent is professional development, including that for bilingual education and Indian education, available in all schools? To what extent are people rising to the challenge of change? To what extent do educators at all levels perceive that there is a federal mandate, and thus an opportunity, to do something different? How do we help schools develop problem-solving skills? To what extent are various programs (e.g., Title I, Goals 2000, School-to-Work Opportunities, other ESEA programs, and Head Start) coordinated at different levels of the system?

Finally, the panel decided to spend part of its Thursday and Friday meetings reviewing Section 1501(a)(2)(B)(i)-(x) of ESEA to try to narrow the focus of its research questions. These efforts yielded the following research questions and comments--organized around Section 1501(a) of ESEA--that respond to the legislative mandate under ESEA as well as address cross-cutting issues relevant to broad-based education reform.

Sec. 1501(a) National Assessment.--

(2) EXAMINATION.--The assessment shall examine how well schools, local educational agencies, and states are--

(A) progressing toward the goal of all children served under this title reaching the state's challenging state content standards and challenging state student performance standards;

-- Is student achievement improving? To what extent? Does Title I funding affect student achievement? What are states and localities doing to raise student achievement?
-- To what extent are Title I students held to the same standards as non-Title I students? To what extent is student learning progressing toward high standards?
-- How do states define adequate yearly progress? To what extent are states defining adequate yearly progress to reflect challenging expectations for student performance? Are states making progress? Are there things that can be done through technical assistance or policymaking that can help speed things along?
-- Which students are making how much progress? (disaggregation)
Comments: One panel member expressed his desire not only to talk about where we are as a nation in terms of student achievement but also about where we are along a continuum in implementing the vision of reform suggested by the new law: "There's much potential in the new law, yet we run the risk of not realizing that potential because we have some narrow way of looking at and evaluating it. By 1998, [when the final report of the National Assessment is due], states may just be getting to a place where they can begin to implement the kind of changes that the law promotes." Another IRP member suggested that the panel focus on looking at how states define adequate yearly progress and determine whether it is adequate. If it is not deemed adequate, how can the panel and/or ED help states establish higher standards? "Part of what has to come out of this process is an education for lawmakers that helps them understand that they have initiated a process that has some promise of being a great process, one that can be refined or that needs to be modified, and in some places shows promising results. To me, I think that's the best we can do."

(B) accomplishing the purposes set forth in section 1001(d) to achieve the goal described in paragraph (1), including--

(i) ensuring challenging state content standards and challenging state student performance standards for all children served under this title and aligning the efforts of states, local educational agencies, and schools to help such children reach such standards;

-- To what extent are states establishing challenging content and student performance standards? In what subject areas?
-- How are content and student performance standards being set? Are they voluntary or mandatory? Who is involved in setting the standards (e.g., state legislators, state administrators, principals, teachers, parents)?
-- To what extent do parents, teachers, students, and others know about and understand the content and student performance standards?
-- What is the balance between higher standards and basic standards of student achievement?
-- Can programs, demands, and policies that are stressful to teachers be subtracted in order to support and promote their transition to and adoption of new content and student performance standards? To what extent and in what ways?
-- How do different states, localities, and schools align their efforts, if at all, to help children reach high standards?
-- What process achieves an alignment of efforts to help children reach high standards? Who is involved?
-- How do policymakers, administrators, and practitioners at different levels of the education system perceive alignment?
-- Does alignment foster student learning? What is the effect on student achievement?

Comments: One panel member cautioned that studying alignment will be difficult because many states are just beginning to invent the kinds of things that alignment might need: "I don't think it's reasonable, especially by 1998 when the final report of the National Assessment is due, to expect [ED] to report on the effects of alignment. At best, we could probably begin to understand what alignment begins to mean in practice." The panel member encouraged the group to think about ways of asking administrators what they are doing in regard to alignment.
He explained, however, that for alignment to have instructional validity, the study would have to look closely at a lot of very small things and the relationships among them: "Those relationships are not on anyone's plate today." Another IRP member added that the panel should not expect to have a quantitative answer that's going to say, for example, that 60 percent of professional development is aligned with the curriculum: "Instead, we ought to get a picture of the extent to which alignment is occurring--part of which may have to do with perceptions about alignment." Finally, a third panel member suggested that some parts of measuring alignment could be quantitative. Thirty percent of teachers, for example, could say that some things have changed: "There are quantitative pieces to this."

(ii) providing children served under this title an enriched and accelerated educational program through schoolwide programs or through additional services that increase the amount and quality of instructional time that such children receive;

(iii) promoting schoolwide reform and access for all children served under this title to effective instructional strategies and challenging academic content;

-- To what extent do students receive enriched and accelerated instruction?
-- What is the quality and quantity of instructional time students receive? On what activities is instructional time spent?
-- To what extent are schools shifting from remedial to accelerated instructional interventions?
-- What is the evidence of effective instruction?
-- To what extent are schools taking advantage of the new flexibility in the federal legislation? Are more schools becoming schoolwide programs? Are schools developing new approaches to operating schoolwide programs?
-- What are students', parents', and teachers' perceptions of schoolwide program schools?
-- How are schools interpreting the definition of "schoolwide programs," and what are the consequences of those interpretations?
-- How do within-school resource allocations affect how schools construe schoolwide programs?
-- On what basis do eligible schools decide to become schoolwide program schools? What is the decisionmaking and planning process?

Language Minority Students

-- Are more language minority students served through the new law?
-- Are the services provided to language minority students more educationally meaningful?
-- Is there a shift in resources to support or weaken services to language minority students? Are Title I resources supplanting state- or locally funded programs for language minority students?

Comments: One IRP member suggested that the panel look at the causal link between what schools did to change the infrastructure and what effect that had on student performance. Alan Ginsburg suggested that the panel separate the questions of what happened and why, noting that the same study probably will not answer both: "You need to know how to judge the quality of instruction, parent involvement, professional development, and the like. How you causally link them is a separate question. If you try to do everything in one study, it's going to break down."

(iv) significantly upgrading the quality of the curriculum and instruction by providing staff in participating schools with substantial opportunities for professional development;

-- Are schools, districts, and states providing professional development opportunities to teachers? Who decides what professional development is provided to teachers?
-- What are the perceived avenues for professional development (e.g., in the process of teachers' work)?
-- How much non-instructional time is provided for teachers' professional development?
-- To what extent does teacher participation in professional development activities disrupt student instruction (e.g., by using substitutes)? What happens in the classroom when teachers are away?
-- How do various professional development activities affect teachers' understanding and implementation of content and student performance standards?
-- Who are teachers learning from?
-- Are professional development plans focused on the school or the individual teacher?
-- To what extent and in what ways do schools evaluate professional development activities?
-- Did the new legislative requirements about professional development affect the quality or availability of professional development opportunities for teachers?

Comments: Given the vastness of the professional development field, one panel member wondered whether the legislation makes certain issues pertaining to professional development more important than others. Dr. Ginsburg asked the panel to decide whether it wants ED to synthesize the research being done in this and other areas and identify gaps in the knowledge base: "With some proposal from you [the panel], ED could work with other agencies in some way to begin assessing some of these questions."

(v) using and evaluating the usefulness of opportunity-to-learn standards or strategies in improving learning in schools receiving assistance under this part;

-- How are opportunity-to-learn (OTL) standards defined, and where are they included in state plans? Where are OTL standards waived?
-- What strategies are states using to promote OTL standards?
-- How do OTL standards compare across districts? How do OTL standards compare among classrooms?
-- Who is providing the opportunity to learn? What are the incentives for more able teachers to work with disadvantaged students? Are teachers certified by the National Board for Professional Teaching Standards (NBPTS)?
-- How are states and local school districts dealing with the new school improvement requirements?
-- What is the SEA and LEA capacity to assist schoolwide programs and targeted assistance schools?
-- What constitutes an opportunity to learn?

Comments: One panel member stressed the need to examine OTL within each part of the statute because it is not a discrete issue. It was suggested that Dr. Andy Porter's work on OTL be used to help guide the panel's thinking about questions pertaining to studying OTL standards. Another panel member asserted that the beginning of the new law states that all children are equal and should have the same instructional strategies available to them: "If a school has a good Title I plan, we should be able to determine whether they have appropriate OTL standards. If they only talk about reading, writing, and arithmetic, with no great strategies to carry out their instruction, then they really don't have good OTL standards." A third panel member cautioned against relying on state plans to assess the impact of federal legislation on education reform: "Schools write a lot of stuff down [that they never do]." A fourth IRP member questioned the need for or utility of evaluating OTL standards: "It's going to be difficult to measure OTL standards when most states are saying they don't need them."
To this panel member, the interesting question is determining who is providing the opportunities: "Studies show that the most experienced teachers are working with students who are not disadvantaged. Maybe we should look at the extent to which that has changed [with the introduction of the new federal laws]."

(vi) coordinating services provided under all parts of this title with each other, with other educational and pupil services, including preschool services, and, to the extent feasible, with health and social service programs funded from other sources;

-- Are the opportunities to coordinate services, available under the law, used? (ED Performance Indicator)
-- To what extent do state and local policies, programs, and practices reflect increased coordination?
-- Are students who are enrolled in schools that coordinate services learning more?
-- To what extent are all parts of Title I coordinated (especially migrant education programs and N or D [neglected or delinquent] programs)? For example, are more migrant children served? Is coordination promoted at all levels of program administration?
-- How do different programs work within different schools? For example, to what extent do special education and Title I programs work together?
-- Is there greater access to services?
-- Is there less duplication of services?
-- Is there a seamless delivery system?
-- What is the impact of high school targeting on coordination within ESEA?
-- Are preschool services coordinated with Title I?
-- How is "glue money" used?
-- How are other health and social services tied in, and does tying them in make a difference? (Are there any other studies on this topic?)

Comments: Bayla White drew the panel's attention to the fact that, to answer some of the questions regarding coordination of services, the panel will need to look at a different set of schools than those traditionally selected for Title I studies. That is, to know how migrant education services are linked with Title I services, the panel needs a study that will go into LEAs where migrant populations exist.

(vii) affording parents of children served under this title meaningful opportunities to participate in the education of their children at home and at school, such as the provision of family literacy services;

-- To what extent are schools involving parents in the education of their children? To what extent are parents active partners in the education of their children?
-- To what extent do schools receive feedback from parents?
-- To what extent do parents know about content and student performance standards, parent compacts, etc.?
-- To what extent do parents feel comfortable and welcome in their children's school?
-- To what extent are the needs of parents of language-minority students addressed, particularly family literacy needs?
-- Do Title I parent advisory councils coordinate with other parent councils or organizations?

Comments: One panel member pointed to some potential sources of information on aspects of parent involvement, including customer surveys and the National Evaluation of Even Start. Another panel member wondered how studies can get at the parents: "So many studies ask the schools." A third member of the IRP said her school got a 22 percent response rate on a survey of parents when it used a private company in Philadelphia, PA, to conduct the survey.

(viii) distributing resources to areas where needs are greatest;

-- What is the impact of within-district targeting?
-- How are resources being combined and expended in schoolwide programs? What is the impact of combining resources?
-- To what extent are people coordinating resources at all levels of program administration (particularly the use of discretionary dollars)?
-- To what extent are LEAs spreading resources as far as possible versus targeting resources to the highest-poverty schools?
-- Are LEAs targeting children or high-poverty schools?
-- How has the provision of resources to schools changed with the new law? To what extent do administrators view the legislation as allowing other opportunities or ways to distribute resources?
-- What is happening with state and local resources? How are funds shifted (e.g., state compensatory education may supplant lost funds)?
-- What is the impact of declining revenue, increased flexibility, and increased targeting on services to disadvantaged children?
-- What is the impact of the targeting provisions of the legislation on different LEAs?
-- What processes do schools use to determine how resources will be used? What is the rationale behind their decisions, and does that rationale support advancing student achievement?

Comments: Dr. Ginsburg noted that there is a difference between tracking general within-school resource allocations and tracking the specific allocation of Title I dollars. He explained that while evaluations may not be able to describe the Title I portion that funds teachers, they can still identify how schools allocate their total resources among staff, supplies, the physical plant, and so on. "When you tag the dollars," Dr. Ginsburg explained, "you run into more problems [in trying to separate out how they affect students and schools]." One IRP member stressed the importance of determining the extent to which districts target more resources to high-poverty schools: "Even though there's opportunity to target more funds to high poverty schools, there's also opportunity for LEAs to spread the funds across a larger number of schools. We need to determine the extent to which LEAs with high poverty schools are falling prey to the political temptation to spread money as far as possible in order to please the greatest number of people." A second panel member pointed out the tension between the targeting provisions enacted under Title I and certain programmatic requirements. The provisions, proposed by the Administration, emphasized increased targeting to higher-poverty districts and schools in order to spread funds less thinly. The final legislation, however, allows funds to be spread across a greater number of districts, leaving fewer dollars for high-poverty districts. Thus there will be schools--particularly in larger urban areas--with 50 percent of students eligible for free or reduced-price lunch that will not be served. Another panel member explained that there are enormous pressures on school boards and LEAs from school administrators, teachers, and parents to allocate resources to them: "My big fear is that school boards will feel they have to do the same thing for everybody--that's what equal opportunity means for some people." A fourth IRP member commented that the changes in Title I (e.g., increased flexibility, increased targeting) put a lot of pressure on school districts to spread the funds out: "Getting at some of those dynamics will be difficult, but very important."
On a practical note, another panel member warned that it has become more difficult in recent years to gain access to urban districts due to staffing cutbacks and the consequent workloads that prevent administrators from hosting researchers' visits.

(ix) improving accountability, as well as teaching and learning, by making assessment under this title congruent with State assessment systems;

-- Are Title I assessments linked to state assessments? How does Title I build on state accountability systems? Are systems designed to be amenable to Title I? [Note: Need accurate information from states]
-- How are assessment results used (e.g., to evaluate schools or groups of children)?
-- For states in transition, how are schools in need of improvement identified? How is progress measured? Does the measure of when schools are in need of improvement reflect some high standard of progress?
-- What are state incentives, rewards, and sanctions regarding student performance?
-- Which students are being tested? Are students in greatest need of services (e.g., LEP students) being exempted from testing?
-- What assessment systems have states designed? What operations are in place (e.g., Kentucky's)?

Comments: Although the discussion was relatively brief, many panel members expect a number of other accountability questions to emerge over time. Nevertheless, one IRP member suggested that perhaps the best way to deal with this issue is not to take on the whole accountability system.

(x) providing greater decisionmaking authority and flexibility to schools in exchange for greater responsibility for student performance.

-- To what extent do schools have greater decisionmaking authority, especially over allocating resources?

Comments: One panel member said that, in his view, the law seems to convey the notion that schools will take their funding allocation and determine the best way to use it to meet the needs of children. To what extent, he asked, is that happening? Are funding allocations, in reality, being determined by a person in the district office who, paying no attention to relative need, spreads the resources evenly among schools?
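The difference between spreading funds evenly and targeting them by need can be made concrete with a hypothetical example (the figures here are invented for illustration and were not part of the panel's discussion). Suppose a district receives $500,000 in Title I funds for five eligible schools enrolling 600, 300, 200, 150, and 100 children in poverty, or 1,350 in all. Divided evenly, each school receives $100,000 regardless of need. Divided in proportion to each school's share of poor children, the neediest school receives roughly $222,000 (600/1,350 x $500,000), while the least needy receives about $37,000 (100/1,350 x $500,000). The choice between the two rules more than doubles the neediest school's allocation--precisely the dynamic the panel proposed to study.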