
Washington Statistical Society Seminars 1999

January 1999
  6  Wed.   Estimating the Risk of an Outcome when the Outcome is Measured with Uncertainty
  8  Fri.   On the Stochastic Approach to Index Numbers
 13  Wed.   Data Warehousing at the National Agricultural Statistics Service
 19  Tues.  Researcher Access to Assisted-Housing Household Data: The Process at HUD
 19  Tues.  Assessing Software Reliability: How Accurate is Econometric/Statistical Software?
 25  Mon.   Improving Survey Quality: Can We Develop Better Measures of Quality and Will They Help?

February 1999
 16  Tues.  Issues in Web Data Collection
 17  Wed.   Electricity Restructuring: Opportunities to Exercise Market Power?
 22  Mon.   How to Increase Performance at Lesser Cost in Government

March 1999
 16  Tues.  Parametric and Semi-parametric Regression Estimation from Complex Survey Data
 17  Wed.   Determining an Allocation of Trainers and Field Representatives to Training Sites and Start Times
 31  Wed.   Using a Theory of Survey Participation to Design an Interviewer Training Protocol in Establishment Surveys

April 1999
  8  Thur.  Collecting and tabulating race and ethnicity data: Discussion of the OMB provisional guidelines
  9  Fri.   To Sample or Not to Sample? Why is That the Question For Census 2000?
 21  Wed.   Telephone Surveys: Past, Present, and Future -- Highlighting the Contributions of James T. Massey
 26  Mon.   Permutation Tests for Joinpoint Regression with Applications to Cancer Rates

May 1999
 11  Tues.  Conference: Data to Decisions: New Age Information, Private Strategies, Public Policies
 12  Wed.   Giving Users What They Really Need
 18  Tues.  How To Identify Hard-To-Count Areas With The Planning Database
 19  Wed.   Developments in the American Community Survey
 26  Wed.   Long Form Estimation In A Census

June 1999
  3  Thur.  Some Aspects of Social Security Reform
  3  Thur.  Total Survey Question Design: What Does It Mean? What Are the Implications for Question Evaluation?
  9  Wed.   Desktop Hosting of Web-Based Training: Back to Small, Local, and Personal
 10  Thur.  Respondent Incentives in Surveys: A Fresh Look
 16  Wed.   Choosing a Variance Computation Method for the Revised Consumer Price Index
 17  Thur.  Census Household Survey Redesigns
 18  Fri.   Quality in the Government: USPS Quality Process for Improving Timely Delivery
 22  Tues.  Planning for the Early Childhood Longitudinal Study -- Birth Cohort 2000 (ECLS-B)

September 1999
  1  Wed.   Correcting estimated relative risk for dietary measurement error: Have we been misleading ourselves?
  9  Thur.  Event History Analysis of Interviewer and Respondent Survey Behavior
 23  Thur.  Balancing Confidentiality and Burden Concerns in Surveys and Censuses of Large Businesses
 30  Thur.  The American Community Survey: A Status Report for Statisticians

October 1999
  1  Fri.   Membership Functions and Probability Measures of Fuzzy Sets
  6  Wed.   Saving for a Rainy Day: Does Pre-Retirement Access To Retirement Saving Increase Retirement Saving?
 12  Tues.  Variance Estimation 1: Accounting for Imputation (first of two-part series)
 26  Tues.  The 1998 Morris Hansen Lecture: Diagnostics for Modeling and Adjustment of Seasonal Data

November 1999
  3  Wed.   Rethinking Historical Controls
  4  Thur.  Variance Estimation: Effects of Weight Adjustments (second of two-part series)
  9  Tues.  Using Auxiliary Data to Improve the Survey Frame and Reduce Survey Costs
 10  Wed.   Policy Options for Reducing Poverty Among Elderly Women
 16  Tues.  A Bayesian Exploratory Data Analysis of Spatio-Temporal Patterns in U.S. Homicide Rates
 19  Fri.   A Cognitive Study of Methods of Classifying Epidemiological Data on a Series of Choropleth Maps
 19  Fri.   The Atlas of Cancer Mortality in the United States, 1950-94 and Its Associated Web Sites
 23  Tues.  Estimation & Variance Estimation in a Standardized Economic Processing System

December 1999
  2  Thur.  Estimation from Linked Surveys
  8  Wed.   Statistics and a Digital Government for the 21st Century
 15  Wed.   Survey Design & Analysis for Hierarchical Linear Models



Topic: Estimating the Risk of an Outcome when the Outcome is Measured with Uncertainty

  • Speaker: Larry Magder, Department of Epidemiology and Preventive Medicine, University of Maryland at Baltimore.
  • Date/Time: Wednesday, January 6, 1999 at 11:00 am
  • Location: Conference Room G, Executive Plaza North (EPN), 6130 Executive Blvd, Rockville, MD
  • Sponsor: NCI

Abstract:

Often in biomedical research there is scientific interest in estimating the association between predictors and a binary outcome, but the binary outcome is measured with uncertainty. For example, clinical outcomes are often assessed with diagnostic tests that have imperfect sensitivity and specificity. Or, as another example, outcomes of a smoking cessation program might be assessed by self-report, with accuracy subject to some doubt. In this talk, it is shown that a likelihood approach can be used to incorporate assumptions about the accuracy of the outcome measurement into the estimation of associations with predictors. Closed-form formulas for estimating odds ratios and risk ratios from 2-by-2 tables will be presented. An EM algorithm for fitting logistic regression models with uncertain outcomes will be described, as will an extension of the method to the situation in which there are multiple imperfect measurements of the outcome of interest. Several examples will be presented, and the results of the method will be compared to those obtained using methods that ignore the imperfect measurement. A SAS macro that implements the method will be made available.
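The closed-form correction mentioned above can be illustrated with a small sketch. This is a generic misclassification adjustment under assumed sensitivity and specificity, not the speaker's SAS macro; the counts and accuracy values below are invented.

    # Correct an observed 2-by-2 table for an outcome measured with known
    # sensitivity (se) and specificity (sp), then compare naive and corrected
    # risk ratios. p_true = (p_obs + sp - 1) / (se + sp - 1) is the classical
    # back-correction of an observed proportion.

    def corrected_risk(p_obs, se, sp):
        """Back-correct an observed outcome proportion for imperfect measurement."""
        return (p_obs + sp - 1.0) / (se + sp - 1.0)

    def risk_ratio(table, se=1.0, sp=1.0):
        """table = ((events_exposed, n_exposed), (events_unexposed, n_unexposed))."""
        (a, n1), (b, n0) = table
        p1 = corrected_risk(a / n1, se, sp)
        p0 = corrected_risk(b / n0, se, sp)
        return p1 / p0

    table = ((40, 200), (25, 200))                    # hypothetical counts
    print("naive RR:    ", risk_ratio(table))         # treats the measurement as perfect
    print("corrected RR:", risk_ratio(table, se=0.85, sp=0.95))

With these made-up numbers the naive risk ratio of 1.6 rises to 2.0 after correction, illustrating the attenuation toward the null that such a correction is meant to undo.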


Topic: On the Stochastic Approach to Index Numbers

  • Speaker: Erwin Diewert, University of British Columbia, Vancouver
  • Discussant: Alan H. Dorfman, Bureau of Labor Statistics
  • Chair: Stuart Scott, Bureau of Labor Statistics
  • Date/Time: Friday, January 8, 1999; 12:30 - 2:00 p.m.
  • Location: BLS Cognitive Lab, Postal Square Building, Room 2990, 2 Massachusetts Avenue, NE, Washington, DC (Metro Red Line-Union Station). Use the First St. NE entrance. Call Karen Jackson (202-606-7524) at least 2 days before the talk to be placed on the visitor list, and bring photo ID.
  • Sponsor: Methodology Section

Abstract:

This presentation reviews the recent rebirth of the stochastic approach to index number theory. The earlier contributions of Jevons, Edgeworth, and Bowley are also reviewed. The stochastic approach treats each price relative as an estimate of the amount of inflation between the base period and the current period. By averaging these price relatives, a more accurate estimate of price change is obtained and a confidence interval for the amount of inflation can be derived. Four criticisms of the stochastic approach are presented and assessed, including, most basically, Keynes' assertion that price relatives must be weighted according to their economic importance. The test or economic approaches to index number theory provide a basis for determining the precise nature of the weights.
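As a rough illustration of the unweighted stochastic approach described in the abstract (not Diewert's own formulation, and ignoring the weighting issue raised by Keynes), the sketch below averages log price relatives and attaches a confidence interval to the implied inflation estimate; the prices are invented.

    # Treat each log price relative as an estimate of common inflation, average
    # them (a Jevons-type geometric-mean index), and form a confidence interval.
    import math

    base    = [1.00, 2.50, 0.80, 5.00, 3.20]   # base-period prices (hypothetical)
    current = [1.05, 2.60, 0.88, 5.10, 3.52]   # current-period prices (hypothetical)

    logrels = [math.log(c / b) for b, c in zip(base, current)]
    n = len(logrels)
    mean = sum(logrels) / n
    var = sum((r - mean) ** 2 for r in logrels) / (n - 1)
    se = math.sqrt(var / n)

    index = math.exp(mean)                                   # point estimate
    lo, hi = math.exp(mean - 1.96 * se), math.exp(mean + 1.96 * se)
    print(f"index = {index:.4f}, approximate 95% CI = ({lo:.4f}, {hi:.4f})")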


Title: Data Warehousing at the National Agricultural Statistics Service

  • Speakers: Mickey Yost and Jim Cotter, National Agricultural Statistics Service
  • Chair: Mike Fleming, National Agricultural Statistics Service
  • Date/Time: Wednesday, January 13, 1999, 12:30-2:00 p.m.
  • Location: BLS Cognitive Lab, Postal Square Building, Room 2990, 2 Massachusetts Avenue, NE, Washington, DC (Red Line--Union Station). Enter at 1st Street and Massachusetts Avenue. Please call Karen Jackson at 202-606-7524 or send e-mail to jackson_karen@bls.gov to be placed on the visitor's list.
  • Sponsor: Statistical Computing Section
Abstract:

Each year, several thousand data files containing agricultural survey data from farmers, ranchers, and agri-businesses are generated on different platforms, using different software systems and different data definitions or metadata. To answer the Agency's strategic need to improve its entire statistical program by making use of this rich, but not readily accessible, store of historical data, NASS built a Data Warehouse of its census and survey micro-data.

In July 1998, close to 600 NASS employees were accessing the Agency's Data Warehouse. Access was simplified by a database schema containing only seven tables: a central data table containing all the survey and census data, and six corresponding dimension tables that provide all the necessary information, or metadata, to access the data readily. It is this choice of a dimensional model (star schema) that makes access to well over 300 million rows in the Data Warehouse fast and easy. Queries, such as retrieving anyone in a survey reporting positive corn production, are returned from the Data Warehouse in less than five seconds.
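For readers unfamiliar with the star schema idea, the sketch below shows the query pattern in miniature: filter a small dimension table, then join it to the central fact table. The table and column names are hypothetical, not the NASS schema.

    # One central fact table of reported values plus small dimension (metadata)
    # tables; a query such as "anyone reporting positive corn production"
    # filters a dimension and joins it to the facts.
    import pandas as pd

    facts = pd.DataFrame({              # central data table: one row per reported value
        "record_id": [1, 2, 3, 4],
        "item_id":   [10, 10, 20, 10],
        "value":     [150.0, 0.0, 32.5, 410.0],
    })
    items = pd.DataFrame({              # one of the dimension tables
        "item_id":   [10, 20],
        "item_name": ["corn production", "hog inventory"],
    })

    corn = items[items["item_name"] == "corn production"]
    result = facts.merge(corn, on="item_id")
    print(result[result["value"] > 0])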

NASS will use the Data Warehouse to serve the historical data requirements of all our survey and census programs so that future statistical procedures and survey decisions are based on all available data (our inputs) to improve the quality of our agricultural statistics (our outputs or product).

e-mail: myost@nass.usda.gov and jcotter@nass.usda.gov

Title: Researcher Access to Assisted-Housing Household Data: The Process at HUD

  • Speaker: David Chase, U.S. Department of Housing and Urban Development
  • Date/Time: Tuesday, January 19th, 12:30-1:45
  • Location: Bureau of Labor Statistics (BLS), Room 2990, 2 Massachusetts Avenue, NE, Washington, DC (Metro Red Line - Union Station). Use the First Street, NE entrance. Please send e-mail to Karen Jackson (jackson_karen@bls.gov) at least two days before the talk to be placed on the visitor's list, and bring a photo ID.
  • Sponsor: The Washington Statistical Society

Abstract:

The Department of Housing and Urban Development (HUD) seeks to expand research and policy analysis of the characteristics of recipients of assisted housing. To accomplish this, HUD has developed procedures to release household-level data to researchers. While individual households are not directly identified, the content of the data could possibly be matched with other data sources to identify individuals. For this reason, researchers receiving the data must agree not to release the data to others. Also, statistical disclosure limitation techniques are applied to the data to reduce the possibility of identification. This presentation will describe the process HUD went through to make these data available, including: 1) achieving internal agreement on the appropriate level of detail; 2) preparing a routine use statement for public comment; 3) testing the actual data for identifiability; and 4) applying statistical disclosure limitation techniques to mask identities.


Topic: Assessing Software Reliability: How Accurate is Econometric/Statistical Software?

  • Speaker: B. D. McCullough, Federal Communications Commission
  • Chair: Linda Atkinson, Economic Research Service
  • Day/Time: Tuesday, January 19, 1999, 12:30-2:00 p.m.
  • Location: Waugh Auditorium B, 1800 M Street, N.W., Washington, DC (between Farragut North and Farragut West subway stops). Enter the South Lobby and take the elevator to the 3rd floor. Email Linda Atkinson (atkinson@econ.ag.gov) or call (202) 694-5046 by January 15 to be placed on the visitors' list for the guard's desk.
  • Sponsor: Economics Section and Statistical Computing Section

Abstract:

In the current issue of The American Statistician (November, 1998), McCullough proposes a methodology for assessing the reliability of statistical software on three fronts: estimation, random number generation, and statistical distributions (e.g., for calculating p-values). Results of applying this methodology to several packages will be presented: SAS, SPSS, S-PLUS, EViews, LIMDEP, SHAZAM, TSP, and EXCEL. No package scores perfectly, and some packages are decidedly better than others.

Time permitting, additional benchmark results will be presented for Wilkinson's Tests and for the Fiorentini-Calzolari-Panattoni benchmark for GARCH estimation. Again, some packages are decidedly better than others.
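The accuracy comparisons in such benchmarks are usually summarized with the log relative error (LRE), roughly the number of significant digits on which an estimate agrees with a certified value. The sketch below shows the metric only; the numbers are invented and are not results for any of the packages named above.

    import math

    def lre(estimate, certified):
        """Log relative error: approximate count of agreeing significant digits."""
        if estimate == certified:
            return float("inf")
        if certified == 0:
            return -math.log10(abs(estimate))     # fall back to log absolute error
        return -math.log10(abs(estimate - certified) / abs(certified))

    certified = 1.4676107913                      # hypothetical certified coefficient
    for package, est in [("package A", 1.4676107910), ("package B", 1.4675)]:
        print(package, "LRE =", round(lre(est, certified), 1))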


Topic: Improving Survey Quality: Can We Develop Better Measures of Quality and Will They Help?

  • Panel: B.K. Atrostic, U.S. Bureau of the Census
    Chet Bowie, U.S. Bureau of the Census
    Connie Citro, Committee on National Statistics
    Pat Doyle, U.S. Bureau of the Census
  • Chair: Chet Bowie, U.S. Bureau of the Census
  • Date/Time: Monday, January 25, 1999; 12:30 - 2:00 p.m.
  • Location: Conference Center, Meeting Room 2, Postal Square Building, 2 Massachusetts Avenue, NE, Washington, DC (Metro Red Line-Union Station). Use the First St. NE entrance. Call Karen Jackson (202-606-7524) at least 2 days before the talk to be placed on the visitor list, and bring photo ID.
  • Sponsor: Quality Assurance Section
Abstract:

Statistical organizations strive continuously to improve the quality of their surveys. Quality, however, can be hard to measure. Some of the dimensions of quality most important to customers and data users are often the most elusive to quantify. Changing survey practices may also produce new dimensions of quality that require new measures. What measures do statistical organizations use to assess how successful they are in improving quality? Are better measures needed to improve the feedback organizations require to monitor performance? Do policy makers, sponsors, and data users find the measures useful? Both U.S. and other national statistical organizations are reviewing or developing more formal quality assurance frameworks, or have developed programs requiring innovative quality measures. The speakers will discuss their experiences in defining dimensions of survey quality; developing consistent, timely, and new measures; and using the measures to monitor and improve their surveys.

Topic: Issues in Web Data Collection

  • Speakers: Nancy Bates and Elizabeth Nichols (U.S. Bureau of the Census)
  • Date/Time: Tuesday, February 16, 1999, 12:30 to 2:00 p.m.
  • Where: BLS Conference and Training Center, Rooms 9 and 10, Postal Square Building, 2 Massachusetts Avenue, NE, Washington, DC (Metro Red Line-Union Station). Use the First St. NE entrance. Call Karen Jackson (202-606-7524) at least two days before the talk to be placed on the visitor's list, and bring photo ID.
  • Sponsor: Data Collection Methods Section
  • Co-Sponsor: Washington/Baltimore Chapter of the American Association for Public Opinion Research
Abstract:

(1) The Census Bureau WWW Hiring Questionnaire: A Usability Testing Case Study. The successful migration of a paper-based form to a web-based application requires more than simply converting the paper questionnaire into a computerized instrument. Frequently, a new web survey requires designers to rethink question ordering, layouts, formats, and question wordings. Usability testing of the new application is equally important, and timing is critical for the testing to effectively diagnose and fix problems. Ideally, usability testing should occur before the formal design stage, again during the initial prototype stage, and then again during a retest stage. In August of 1997, the Census Bureau moved its hiring system for three major job series to the USAJOBS Internet site maintained by the Office of Personnel Management (OPM). An on-line questionnaire was a critical component of the new system and was implemented by adapting a 'bubble' paper form to a web questionnaire. Prior to the switch, no usability testing took place. Using this as a case study, we examine some problems encountered with the new application, the steps taken to remedy them, and some performance measures to evaluate whether the revised application worked better after usability testing was made part of the redesign effort.

(2) Economic Data Collection via the Web: A Census Bureau Case Study. In April 1997, the Census Bureau conducted a proof-of-concept study of collecting data via the Web. A Computerized Self-Administered Questionnaire (CSAQ) was developed in HTML/JavaScript and was used by 50 companies in the 1996 Industrial Research and Development (R&D) Survey. Prior to implementing the study, a paper screener questionnaire was sent to all companies in the R&D panel. This paper screener requested information about interest in reporting via the Web and the ability of the company to do so. Screener results and results from the electronic Web CSAQ are examined, as well as the methodology and usability findings of this proof-of-concept system.

Topic: Electricity Restructuring: Opportunities to Exercise Market Power?

  • Speaker: Robert Eynon, Energy Information Administration
  • Discussant: William Meroney, Federal Energy Regulatory Commission
  • Chair: Brenda G. Cox, Mathematica Policy Research, Inc.
  • Date/Time: Wednesday, February 17, 1999, 12:30 pm to 2:00 pm
  • Location: BLS Cognitive Lab, Postal Square Building, Room 2990, 2 Massachusetts Avenue, NE, Washington, DC (Metro Red Line - Union Station). Use the First St. NE entrance. Call Karen Jackson (202-606-7524) at least 2 days before the talk to be placed on the visitor list and bring photo ID.
  • Sponsor: Methodology Section, Economics Section
Abstract:

Electric utilities, one of the largest remaining regulated industries in the United States, are in the process of transition to a competitive market. Now vertically integrated, the industry will in all probability be segmented, at least functionally, into its three component parts: generation, transmission, and distribution. The proposals and issues are being addressed in Federal and State legislation and are being debated in State regulatory hearings.

These efforts to open electricity markets to competition may offer opportunities for suppliers to exercise market power as electricity services are unbundled. This talk will describe the impetus for and current status of restructuring. It will present the results of a case study of the New England transmission system that examines how the characteristics of the physical network, the laws of physics that determine electrical flow, and the institutional structure meld to determine outcomes. Results of alternative cases will be presented to show how market power could exist in restructured electricity markets.

Title: How to Increase Performance at Lesser Cost in Government

  • Speaker: Dan Curtis, The U.S. Postal Service
  • Chair: Amrut Champaneri, U.S. Department of Agriculture
  • Date/Time: Monday, February 22, 1999, 12:30 to 2:00 p.m.
  • Location: BLS Cognitive Lab, Postal Square Building, Room 2990, 2 Massachusetts Avenue, NE, Washington, DC (Metro Red Line-Union Station). Use the First Street, NE Entrance. Call Karen Jackson at (202) 606-7524 at least two days before the talk to be placed on the visitors' list and bring photo ID.
  • Sponsor: Quality Assurance Section, WSS


Abstract:

Vice President Gore established the National Performance Review (NPR). One of the goals of NPR was to review, revise, and develop service and performance standards in Government. The speaker was involved with NPR for a year and would like to share his extensive experience with it.

In particular, he will describe the book NPR published in 1994 and how it encouraged Government agencies to implement performance measures, report the results publicly, and empower employees to perform better.

Further, several examples will be described to show how the concept is actually working.

Seminar co-sponsored by National Cancer Institute and
Washington Statistical Society

Topic: Parametric and Semi-parametric Regression Estimation from Complex Survey Data

  • Speaker: Danny Pfeffermann, Department of Statistics, Hebrew University, Israel
  • Chair: Barry Graubard, Biostatistics Branch, National Cancer Institute
  • Date/time: Tuesday, March 16, 1999, 12:30 - 2:00pm
  • Place: National Cancer Institute, Executive Plaza North Bldg., Conference Room J, 6130 Executive Blvd, Rockville, Maryland
  • Transportation: Parking is available at the Executive Plaza North Bldg. By subway, get off at White Flint stop; there is a free shuttle bus at the stop to Executive Plaza North Bldg that runs about every 10-15 minutes, or the walk is about one mile. For more information on directions or access, contact Annette Cunningham, (301) 496-4154.


Abstract:

This talk proposes two new classes of estimators for regression models fitted to survey data. These estimators account for the effect of nonignorable sampling schemes which are known to bias standard estimators. Both classes derive from relationships between the population distribution before sampling and the sampling distribution of the sample measurements. The first class consists of parametric estimators. These are obtained by extracting the sample distribution as a function of the population distribution and the sample selection probabilities and applying maximum likelihood theory to this distribution. The second class consists of semi-parametric estimators, obtained by utilizing existing relationships between moments of the two distributions. New tests for sampling ignorability based on these relationships are also proposed. The proposed estimators and other estimators in common use are applied to real data and further compared in a simulation study. The simulations also enable study of the performance of the sampling ignorability tests and bootstrap variance estimators.
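For readers new to this framework, the central relationship behind the parametric class can be sketched as follows; the notation is ours and simplified, and may differ from the speaker's. With population density f_p, sample inclusion indicator I_i, and inclusion probability pi_i, the density of an observation given that it was sampled is

    f_s(y_i \mid x_i) \;=\; \frac{\Pr(I_i = 1 \mid y_i, x_i)\, f_p(y_i \mid x_i; \theta)}{\Pr(I_i = 1 \mid x_i)},
    \qquad \Pr(I_i = 1 \mid y_i, x_i) \;=\; E_p(\pi_i \mid y_i, x_i).

Maximizing the likelihood built from f_s yields the parametric estimators; relationships between moments of f_s and f_p yield the semi-parametric ones, and the sampling scheme is ignorable exactly when the two distributions coincide.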

U.S. Bureau of the Census
Statistical Research Division Seminar Series

Topic: Determining an Allocation of Trainers and Field Representatives to Training Sites and Start Times

  • Speaker: Stephanie Earnshaw, University of North Carolina
  • Date/Time: Wednesday, March 24, 1999, 10:30 - 11:30 a.m.
  • Location: U.S. Bureau of the Census, 4700 Silver Hill Road, Suitland, Maryland - the Morris Hansen Auditorium, Bldg. 3. Enter at Gate 5 on Silver Hill Road. Please call Barbara Palumbo at (301) 457-4974 to be placed on the visitors' list. A photo ID is required for security purposes.
  • Note: This program is physically accessible to persons with disabilities. Requests for sign language interpretation or other auxiliary aids should be directed to Barbara Palumbo (SRD) (301) 457-4974 (v), (301) 457-3675 (TDD).
Abstract:

The fields of social and health research rely heavily on surveys to collect information about people's attitudes and behavior. In gathering certain types of information, face-to-face surveys offer some advantages over other modes of data collection. However, these surveys are often substantially more costly than other modes. The training of field representatives (FRs) makes up a large proportion of these face-to-face survey costs. This task can account for anywhere from 10 percent to 30 percent of the total cost of the survey.

Using operations research techniques can help minimize these and other costs related to the management of survey operations. Operations research techniques have been used to a great extent by the military, manufacturing, and healthcare organizations. However, these methods have not been extensively applied in the field of survey research. This paper explores the potential use of operations research methods in determining training sites and allocating FRs and trainers such that costs are minimized.
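As a toy illustration of the kind of operations research formulation involved (invented numbers, not the paper's actual model), the sketch below casts the allocation of field representatives to training sites as a small transportation problem and solves it as a linear program.

    # Allocate FRs from three regions to two training sites, minimizing total
    # cost subject to training every FR and respecting site capacities.
    import numpy as np
    from scipy.optimize import linprog

    cost = np.array([[110.0, 180.0],            # cost per FR, region i -> site j (hypothetical)
                     [150.0, 120.0],
                     [200.0,  90.0]])
    frs_per_region = np.array([40, 25, 35])     # FRs to be trained, by region
    site_capacity  = np.array([60, 50])         # trainee slots per site

    n_regions, n_sites = cost.shape
    c = cost.ravel()                            # decision variables x[i, j], flattened row-major

    # Each region's FRs must all be assigned: sum_j x[i, j] = frs_per_region[i]
    A_eq = np.zeros((n_regions, n_regions * n_sites))
    for i in range(n_regions):
        A_eq[i, i * n_sites:(i + 1) * n_sites] = 1.0

    # Site capacities: sum_i x[i, j] <= site_capacity[j]
    A_ub = np.zeros((n_sites, n_regions * n_sites))
    for j in range(n_sites):
        A_ub[j, j::n_sites] = 1.0

    res = linprog(c, A_ub=A_ub, b_ub=site_capacity,
                  A_eq=A_eq, b_eq=frs_per_region, bounds=(0, None))
    print(res.x.reshape(n_regions, n_sites))
    print("total cost:", res.fun)

A real formulation would also choose the sites and start times themselves and assign trainers, which typically turns the problem into an integer program, but the structure is similar.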


U.S. Bureau of the Census
Statistical Research Division Seminar Series

Title: Using a Theory of Survey Participation to Design an Interviewer Training Protocol in Establishment Surveys

  • Speaker: Robert M. Groves
    University of Michigan and Joint Program in Survey Methodology
  • Date/Time: Wednesday, March 31, 1999, 10:30 - 11:30 a.m.
  • Location: U.S. Bureau of the Census, 4700 Silver Hill Road, Suitland, Maryland - the Morris Hansen Auditorium, Bldg. 3. Enter at Gate 5 on Silver Hill Road. Please call Barbara Palumbo at (301) 457-4974 to be placed on the visitors' list. A photo ID is required for security purposes.
  • Note: This program is physically accessible to persons with disabilities. Requests for sign language interpretation or other auxiliary aids should be directed to Barbara Palumbo (SRD), (301) 457-4892 (v), (301) 457-3675 (TDD).
Abstract:

A research program in the 1990's led to the formulation of a conceptual structure describing the process of survey participation. The theoretical principles asserted that interviewers' effectiveness in recruiting sample persons is enhanced to the extent that they customize their presentation of the survey request to the concerns of the sample person. A review of current training procedures for interviewers concluded both that little training time was being spent on recruitment behaviors and that these theoretical notions were not part of the rationale of most training regimens. Two tests of training protocols were mounted in two establishment survey efforts. The presentation describes the theoretical notions and the measured effects of the training on cooperation rates in the surveys.

Topic: Collecting and tabulating race and ethnicity data: Discussion of the OMB provisional guidelines

  • Speakers:
    Clyde Tucker, Bureau of Labor Statistics and
    Susan Schechter, National Center for Health Statistics, Centers for Disease Control and Prevention.
  • Chair: Robert Simmons, Defense Manpower Data Center
  • Date/time: Thursday, April 8, 1999, 12:30 - 2:00 p.m.
  • Place: Bureau of Labor Statistics, 2 Massachusetts Avenue, Conference Room 1, Washington, DC (Metro Red Line - Union Station). Use the First St. NE entrance. To be placed on the visitors' list, e-mail Karen Jackson (jackson_karen@bls.gov), or call 202-606-7524 if you don't have e-mail, no later than Tuesday, April 6.


Abstract:

OMB recently distributed for comment draft provisional guidance on the implementation of the 1997 standards for the collection of federal data on race and ethnicity. The guidance focuses on three areas: data collection, data tabulation, and methods to conduct trend analysis using data collected under both the old and the new standards. An overview of the alternatives suggested in the provisional guidance will be presented and continued research efforts will be described.


President's Invited Session

Topic: To Sample or Not to Sample? Why is That the Question For Census 2000?

  • Speakers:
    Margo J. Anderson, Wilson Center Fellow and Professor of History & Urban Studies at the University of Wisconsin-Milwaukee; and
    Stephen E. Fienberg, Maurice Falk Professor of Statistics & Social Science at Carnegie Mellon University
  • Discussant: Ed Spar, Executive Director, COPAFS
  • Chair: Dwight Brock, President, Washington Statistical Society
  • Date/Time: Friday, April 9, 1999, 3:00-4:30 pm, followed by a cash bar reception at Union Station (place TBA).
  • Location: Meeting Rooms 1 and 2, BLS Conference Center, Postal Square Building, 2 Massachusetts Avenue, NE, Washington, DC (Red Line, Union Station; Parking). Call Ed Spar (703-836-0404) to be placed on the visitors' list.
  • Sponsor: Statistics and Public Policy Section
Abstract:

Every 10 years, the federal government counts the population, reapportions Congress and state and local legislatures, and uses the population data to plug new numbers in funding formulas for government programs. Census taking is usually one of the least newsworthy of federal government activities, except perhaps for the month or two of intense activity around the April census date. This decade, however, plans for taking the 2000 Census have been front page news. The brouhaha began in earnest two years ago, and shows no sign of abating.

The tangled political and technical issues have provided a flood of news and media coverage, much of it confusing rather than enlightening. Myths on Census taking and the role of sampling abound. Congress, the President and the Supreme Court have all weighed in on the issue, to the dismay, we venture, of the career officials of the Census Bureau who conduct the census. What is an "actual enumeration"? What would it mean to have two sets of data for all 7 million census blocks in the country? And how, in the current closely divided partisan environment of Washington, will the issues be resolved?

Like much in American political life, the country has been through census controversy before, though not recently enough to remember. While the sampling methods proposed for 2000 represent genuine innovations in census taking, innovation itself in census taking is not new. Nor is major contention about the meaning of the data, the methods of apportionment and formula writing, or a stalemated political situation. We hope to sort out some of the old and the new and provide suggestions for 2000.

Title: Telephone Surveys: Past, Present, and Future --
Highlighting the Contributions of James T. Massey

  • Speaker: Brenda G. Cox, Mathematica Policy Research, Inc.
  • Discussant: Andrew White, National Academy of Sciences
  • Chair: Monroe Sirken, National Center for Health Statistics
  • Date/Time: Wednesday, April 21, 12:30 p.m. to 2:00 p.m.
  • Location: BLS Cognitive Lab, Postal Square Building, Room 2990, 2 Massachusetts Avenue, NE, Washington, DC (Metro Red Line - Union Station). Use the First St. NE entrance. Call Karen Jackson (202-606-7524) at least 2 days before the talk to be placed on the visitor list and bring photo ID.
  • Sponsor: Methodology Section
Abstract:

The past 20 years have seen the telephone become the predominant data collection mode for publicly and privately sponsored surveys. Using papers written by James T. Massey, this presentation describes critical milestones in the evolution of telephone surveys as a valid data collection approach, and challenges for the future. Massey and Thornberry were among the first to document the dramatic improvement in telephone coverage for the U.S. population. This improved telephone coverage, together with its low cost, led survey sponsors to substitute the telephone for more expensive personal interviews or less reliable mail questionnaires. Simultaneously, researchers initiated investigations designed to identify ways to improve the quality of telephone survey data. This research, together with the widespread adoption of computer-assisted telephone interviewing (CATI) in the late 1970's and early 1980's, led to continuing improvement in the quality of telephone survey data. The last decade has seen increasing emphasis on reducing the cost of telephone interviews while maintaining data quality. The presentation concludes by presenting future directions for telephone survey research as evidenced by Massey's most recent research.

Title: Permutation Tests for Joinpoint Regression with Applications to Cancer Rates

  • Speakers:
    Hyune-Ju Kim, Ph.D., Associate Professor, Syracuse University
    Dave Annett, Systems Analyst, Information Management Services
  • Chair: Linda W. Pickle, Ph.D., National Center for Health Statistics, CDC
  • Date/Time: Monday, April 26, 1999, 10:30 - 12 Noon
  • Location: National Center for Health Statistics, Presidential Building, 11th Floor Auditorium, Room 1110, 6525 Belcrest Road, Hyattsville, Maryland (Metro: Green Line, Prince George's Plaza, then approximately 2 blocks)
  • Sponsor: Public Health & Biostatistics
Abstract:

The identification of changes in the recent trend is an important issue in the analysis of cancer mortality and incidence data. We apply a joinpoint regression model to describe such continuous changes and use the grid-search method to fit the regression function with unknown joinpoints, assuming constant variance and uncorrelated errors. We find the number of significant joinpoints by performing several permutation tests, each of which has a correct significance level asymptotically. Each p-value is found using Monte Carlo methods, and the overall asymptotic significance level is maintained through a Bonferroni correction. These tests are extended to the situation with non-constant variance to handle rates with Poisson variation and possibly autocorrelated errors. The performance of these tests is studied via simulations, and the tests are applied to U.S. prostate cancer incidence and mortality rates.

The presentation will be in two parts. Hyune-Ju Kim will first present the statistical theory of the tests, then Dave Annett will demonstrate software to perform these types of tests.
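To make the grid-search-plus-permutation idea concrete, here is a minimal sketch for testing one joinpoint against none on simulated data. It is illustrative only, not the speakers' software, and it ignores the Poisson-variance and autocorrelation extensions mentioned above.

    import numpy as np

    rng = np.random.default_rng(0)
    x = np.arange(1975, 1996, dtype=float)
    y = 50 + 0.5 * (x - 1975) - 1.2 * np.clip(x - 1988, 0, None) + rng.normal(0, 1, x.size)

    def sse_line(x, y):
        """SSE of a straight-line fit (the no-joinpoint null model)."""
        X = np.column_stack([np.ones_like(x), x])
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)
        return np.sum((y - X @ beta) ** 2)

    def sse_joinpoint(x, y):
        """Best SSE of a continuous two-segment fit, grid-searching the joinpoint."""
        best = np.inf
        for tau in x[2:-2]:
            X = np.column_stack([np.ones_like(x), x, np.clip(x - tau, 0, None)])
            beta, *_ = np.linalg.lstsq(X, y, rcond=None)
            best = min(best, np.sum((y - X @ beta) ** 2))
        return best

    stat_obs = sse_line(x, y) / sse_joinpoint(x, y)

    # Permute the residuals of the null fit to generate data with no joinpoint.
    X0 = np.column_stack([np.ones_like(x), x])
    beta0, *_ = np.linalg.lstsq(X0, y, rcond=None)
    fitted0, resid0 = X0 @ beta0, y - X0 @ beta0

    n_perm, exceed = 499, 0
    for _ in range(n_perm):
        y_star = fitted0 + rng.permutation(resid0)
        exceed += sse_line(x, y_star) / sse_joinpoint(x, y_star) >= stat_obs
    p_value = (exceed + 1) / (n_perm + 1)
    print(f"observed SSE ratio = {stat_obs:.2f}, permutation p-value = {p_value:.3f}")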

Conference: Data to Decisions: New Age Information, Private Strategies, Public Policies

  • Date/Time: Tuesday, May 11, 1999, 8:00 am - 3:30 p.m.
  • Location: Waugh Auditorium, Economic Research Service, USDA, 1800 M St NW.
  • Sponsors: USDA Economists Group, the AAEA Economic Statistics and Information Resources Committee, the Council on Food, Agricultural and Resource Economics, and the Farm Foundation.
  • Fee: $25, payable to the USDA Economists Group; send to Don West, ECS/CREES/USDA, Room 3337-S, US Dept of Agriculture, Washington DC 20250.

Topic: Giving Users What They Really Need

  • Speaker: Philip Bell, Australian Bureau of Statistics
  • Date/Time: Wednesday, May 12, 1999, 1:30 - 2:30 p.m.
  • Location: U.S. Bureau of the Census, 4700 Silver Hill Road, Suitland, Maryland - Room 2113, Bldg. 3. Enter at Gate 5 on Silver Hill Road. Please call Barbara Palumbo at (301) 457-4974 to be placed on the visitors' list.
  • Sponsor: Bureau of the Census Statistical Research Division Seminar Series.

Topic: How To Identify Hard-To-Count Areas With The Planning Database

  • Speakers: J. Gregory Robinson and Antonio Bruce, U.S. Bureau of the Census
  • Date/Time: Tuesday, May 18, 1999, 10:30 - 11:30 a.m.
  • Location: 4700 Silver Hill Road, Suitland, Maryland - the Morris Hansen Auditorium, Bldg. 3. Please call Barbara Palumbo at (301) 457-4974 to be placed on the visitors' list.
  • Sponsor: Bureau Of The Census Statistical Research Division Seminar Series.

Topic: Developments in the American Community Survey

  • Speakers: Cynthia Taeuber and Chip Alexander, U.S. Bureau of the Census
  • Chair: Graham Kalton, WESTAT
  • Date/Time: Wednesday, May 19, 1999, 12:00 - 1:30 p.m.
  • Location: BLS Cognitive Lab, Postal Square Building, Room 2990, 2 Massachusetts Avenue, NE, Washington, DC (Metro Red Line-Union Station). Use the First St., NE entrance. Call Karen Jackson (202-606-7524) at least 2 days before the talk to be placed on the visitor list.
  • Sponsor: WSS Social and Demographic Statistics Section.

Topic: Long Form Estimation In A Census

  • Speakers: Cary Isaki and Julie Tsay, U.S. Bureau of the Census; Wayne A. Fuller, Iowa State University
  • Date/Time: Wednesday, May 26, 1999, 10:30 - 11:30 a.m.
  • Location: U.S. Bureau of the Census, 4700 Silver Hill Road, Suitland, Maryland - the Conrad Taeuber Room, Bldg. 3. Enter at Gate 5 on Silver Hill Road. Please call Barbara Palumbo at (301) 457-4974 to be placed on the visitors' list.

Topic: Some Aspects of Social Security Reform

  • Speaker: Annika Sunden, Center for Retirement Research at Boston College
  • Discussant: Theresa Devine, CBO
  • Chair: Arthur Kennickell, Federal Reserve Board
  • Date/Time: Thursday, June 3, 1999, 12:30 - 2:00 p.m.
  • Location: Bureau of Labor Statistics, 2 Massachusetts Ave. NE, Room 2990 (Cognitive Laboratory). Enter at Massachusetts Avenue and North Capitol Street (Red Line: Union Station). Visitors outside the BLS, please call Karen Jackson at (202) 606-7524 (email: Karen_Jackson@bls.gov) at least two days in advance to have your name placed on the guard's list for admittance. Please bring a photo id.
  • Sponsor: Economics Section
Abstract:

This presentation will discuss some aspects of Social Security reform. I will discuss the importance of Social Security for retirement income and present results showing how social security and pensions affect the wealth distribution. With this as a background, the main part of the presentation will focus on Social Security reform proposals.

It has been suggested that the rate of return in the Social Security system can be increased if the trust funds invest some of their assets in equities. Opponents have argued that it is impossible to maintain a large government trust fund and that investments would be made on the basis of social rather than risk and return considerations. The experiences of state and local pension plans shed some light on this debate. State and local pension plans have for a long time invested their assets in equities, and in this talk I will present some results from an analysis of the investment behavior of public pension plans.

Other reform proposals suggest the introduction of individual accounts, arguing that this would increase retirement savings. Opponents of this proposal suggest that a system with individual accounts would negatively affect the retirement income adequacy of vulnerable groups as well as impose high administrative costs. I will discuss these arguments and present results on individual behavior in 401(k) plans, focusing on how individuals invest their funds and their knowledge of their pension plans.



Topic: Total Survey Question Design: What Does It Mean? What Are the Implications for Question Evaluation?

  • Speaker: Floyd Jackson Fowler, Jr., Center for Survey Research, University of Massachusetts, Boston
  • Date: Thursday, June 3, 12:30-2:00 p.m.
  • Location: Bureau of Labor Statistics, BLS Training and Conference Center, Rooms 1 and 2, Postal Square Building, 2 Massachusetts Avenue, NE, Washington, DC. Use the First St. NE entrance. Call Karen Jackson at (202) 606-7524 (email: Karen_Jackson@bls.gov) at least 2 days before the talk to be placed on the visitors' list and bring photo ID.
  • Sponsor: WSS Data Collection Methods Section
Abstract:

Good survey questions should meet at least half a dozen different standards; meeting some of them actually interferes with meeting others. Moreover, the strategies needed to find out if questions meet these standards require protocols of testing that are much more elaborate than most researchers use today. So, what does all this mean for those who are designing and evaluating survey instruments?

Jack Fowler is a Senior Research Fellow at the Center for Survey Research at UMASS-Boston. He has contributed to the research literature on survey data collection methods in a variety of ways, including a 1995 book, Improving Survey Questions.



Title: Desktop Hosting of Web-Based Training: Back to Small, Local, and Personal

  • Speaker: Dr. Kent L. Norman, Department of Psychology and the Human/Computer Interaction Laboratory, University of Maryland
  • Chair: Mike Fleming, National Agricultural Statistics Service
  • Date/Time: Wednesday, June 9, 1999, 12:30 - 2:00 p.m.
  • Location: BLS Cognitive Lab, Postal Square Building, Room 2990, 2 Massachusetts Avenue, NE, Washington, DC (Red Line-Union Station). Enter at 1st Street and Massachusetts Avenue. Please call Karen Jackson at (202) 606-7524 (email: Karen_Jackson@bls.gov) to be placed on the visitor's list.
  • Sponsor: Statistical Computing Section
Abstract:

Many instructors are turning to the WWW to host the materials and interactions for distance education and classroom-bound courses. Desktop hosting is becoming more and more feasible, and emerging software is making it easier and more cost effective than institutionally provided servers. The pros and cons of the central (institutional) versus distributed (personal) approaches involve pragmatics, academic freedom, intellectual property rights, and interface design. It is argued that distributed desktop hosting provides instructors with a greater sense of control over and ownership of the course, and a greater flexibility to design their own course at all levels of the interface.

HyperCourseware provides a case in point. It will be used to illustrate how one can run one's own server with password protection, use templates for materials, run multi-chat sessions and threaded dialogues, distribute and collect assignments and exams, and even use streaming video for presentations.

e-mail: kent_norman@lap.umd.edu lab: http://www.lap.umd.edu/



Topic: Respondent Incentives in Surveys: A Fresh Look

  • Speakers: Nancy Kirkendall, Energy Information Administration (formerly, Office of Management and Budget); Richard A. Kulka, Research Triangle Institute; and Brad Edwards, Westat
  • Chair: Stuart Scott, Bureau of Labor Statistics
  • Date: Thursday, June 10, 12:30-2:00 p.m.
  • Location: Bureau of Labor Statistics, BLS Conference Center, Room 10, Postal Square Building, 2 Massachusetts Avenue, NE, Washington, DC. Use the First St. NE entrance. Call Karen Jackson at (202) 606-7524 (email: Karen_Jackson@bls.gov) at least 2 days before the talk to be placed on the visitors' list and bring photo ID.
  • Sponsors: WSS Methodology and Data Collection Sections
  • Presentation material:
    Richard A. Kulka slides
    Brad Edwards slides
    Nancy J. Kirkendall slides
    Providing Incentives to Survey Respondents - a summary report of the 1992 Symposium on Providing Incentives to Survey Respondents held at Harvard University
Abstract:

As it becomes more difficult to achieve high response rates on many surveys, increasing attention has been given to the use of respondent incentives. For federally-funded survey contracts, the Office of Management and Budget has permitted the use of monetary incentives only in special circumstances that have clear and compelling justification. Several years ago OMB convened a symposium to examine the topic of incentives in surveys. The time seems right to revisit the issues. Nancy Kirkendall will give us an OMB perspective on the topic.

Dick Kulka wrote the report from the 1992 symposium on incentives. He will give an update on the issues and a summary of research on incentives since the Harvard session. Brad Edwards will report on questions that have dominated incentive discussions at Westat during the past two years, and that have led to a series of experiments. The panelists will address the following questions in a seminar format: When are incentives justified? What are the relevant ethical and economic perspectives? How can incentives be used effectively on telephone surveys? What are the effects of non-monetary incentives, in contrast to cash?

Material from this seminar is available at Methodology Seminars



Topic: Choosing a Variance Computation Method for the Revised Consumer Price Index

  • Speaker: Shawn Jacobson, Bureau of Labor Statistics
  • Discussant: Richard Valliant, Westat
  • Chair: Linda Atkinson, Economic Research Service
  • Day/Time: Wednesday, June 16, 1999, 12:30 - 2:00 p.m.
  • Location: BLS, Postal Square Building, Room 2990, 2 Massachusetts Avenue, NE, Washington, DC (Red Line - Union Station). Enter at Massachusetts Avenue and North Capitol Street. Contact Karen Jackson at (202) 606-7524 (email: Karen_Jackson@bls.gov) at least 2 days before talk to be placed on the visitors' list and bring photo id.
  • Sponsor: Economics Section
Abstract:

Until December 1997, the Consumer Price Index (CPI) estimation system used a hybrid linearization random groups method to compute variances. This method was compared to several alternative combinations of variance methodology and computer package including: a hybrid linearization method implemented using SUDAAN; a stratified random group method implemented using VPLX; and balanced repeated replication and unstratified jackknife methods implemented using WesVarPC. Variances were computed for several index series using each candidate method. The methods were evaluated on the basis of their stability (used as a proxy for the variance of the variance estimates); the VPLX methodology was found to be superior to the other candidate methods.

This presentation gives background on what CPI data are available to the variance system. The candidate variance methods are then described, along with required modifications to the CPI data. Results are given for the studied combinations of index series and variance calculation method. Several non-numeric issues, such as computation time and variance computation cost, are discussed. Finally, the author discusses the emulation of the VPLX methodology using SAS.
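As background, the sketch below shows the random group idea in its simplest form, applied to a toy index estimator on invented data; the production methods compared in the talk are considerably more elaborate.

    import numpy as np

    rng = np.random.default_rng(1)
    price_relatives = rng.lognormal(mean=0.02, sigma=0.05, size=400)   # hypothetical data

    def estimate(sample):
        """Toy 'index' estimator: geometric mean of price relatives."""
        return np.exp(np.mean(np.log(sample)))

    k = 10
    groups = np.array_split(rng.permutation(price_relatives), k)
    theta_full = estimate(price_relatives)
    theta_g = np.array([estimate(g) for g in groups])

    # Random group variance estimate of the full-sample estimator:
    # sum of squared deviations of group estimates, divided by k(k-1).
    var_rg = np.sum((theta_g - theta_full) ** 2) / (k * (k - 1))
    print(f"index = {theta_full:.4f}, random-group SE = {np.sqrt(var_rg):.4f}")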



Topic: Census Household Survey Redesigns

  • Speakers: Leonard Baer and Mackey Lewis, Demographic Statistical Methods Division, Bureau of the Census
  • Discussant: Steve Cohen, Bureau of Labor Statistics
  • Chair: Stuart Scott, Bureau of Labor Statistics
  • Date/Time: Thursday, June 17; 12:30 - 2:00 p.m.
  • Location: BLS Cognitive Lab, Postal Square Building, Room 2990, 2 Massachusetts Avenue, NE, Washington, DC, 20212 (Metro Red Line-Union Station). Use the First St. NE entrance. Call Karen Jackson at (202) 606-7524 (email: Karen_Jackson@bls.gov) at least 2 days before talk to be placed on the visitor list, and bring photo ID.
  • Sponsor: Methodology Section
Abstract:

The first part of the seminar presents a look at major improvements from past sample redesigns for demographic surveys and the accompanying gains and tradeoffs of each. The second part of the seminar looks ahead to the 2000 Sample Redesign. Each phase of redesign is described along with the research issues, initial plans and goals for improvements. Among the surveys covered by this work are the Current Population Survey, National Health Interview Survey, National Crime Victimization Survey, and American Housing Surveys.



Topic: Quality in the Government: USPS Quality Process for Improving Timely Delivery

  • Speakers: Alan K. Jeeves, Principal Statistician, Customer Satisfaction Measurement, Consumer Affairs; and Lizbeth J. Dobbins, Manager, Customer Satisfaction Measurement, Consumer Affairs
  • Chair: Amrut Champaneri, U. S. Department of Agriculture
  • Date/Time: Friday, June 18, 1999, 12:30 - 2:30 p.m.
  • Place: Bureau of Labor Statistics, 2 Massachusetts Avenue, Room 2990, Washington, DC (Metro Red Line - Union Station). Use the First St. NE entrance. To be placed on the visitors' list, contact Karen Jackson at (202) 606-7524 (email: Karen_Jackson@bls.gov) no later than Wednesday, June 16.
  • Sponsor: Quality Section, Washington Statistical Society
Abstract:

CustomerPerfect! is the quality process that has been adopted by the USPS, and it is based on the Malcolm Baldrige criteria for performance excellence. It is a systematic approach to driving the best business results for the company. Voice of the Customer goals have been established to improve customer satisfaction by focusing on timely delivery, accuracy, consistency, and ease of use.

The External First-Class Measurement System (EXFC) is the scorecard by which the Postal Service assesses the quality of its First-Class delivery service. Field managers are held directly accountable for its results, which also influence their team-based pay.

Evolving changes in corporate focus mandated significant changes in the statistical design and methodology of EXFC, as well as an expansion of the system coverage area. However, these changes had to be implemented even as the system continued to generate information upon which strategic corporate decisions were being based. Since EXFC data are the official source of service performance quality information, it was essential that data from the expanded system link back to the earlier data.

These changes had to be very carefully planned. Since EXFC results are used to drive performance quality, a parallel system would have been untenable, as it would have generated discrepant measurements of the same result. An incorrect implementation would have adversely affected the USPS quality process as well as the basis of corporate strategic planning.

This presentation will describe how the Postal Service implemented major changes to this critical measurement system of service performance quality without any interruption in the information flow.



Topic: Planning for the Early Childhood Longitudinal Study -- Birth Cohort 2000 (ECLS-B)

  • Speakers: Brad Edwards and James Green from Westat and Jerry West, NCES
  • Chair: Jerry West, NCES
  • Date/Time: Tuesday, June 22, 1999; 12:30 - 2:00 p.m.
  • Location: BLS Conference Center, Room 2, Postal Square Building, Room 2990, 2 Massachusetts Avenue, NE, Washington, DC (Metro Red Line-Union Station). Use the First ST., NE entrance. Call Karen Jackson at (202) 606-7524 (email: Karen_Jackson@bls.gov) at least 2 days before talk to be placed on the visitor list, and bring a photo ID.
  • Sponsor: Social and Demographic Statistics Section


Abstract:

The Early Childhood Longitudinal Study -- Birth Cohort 2000 (ECLS-B) is a new project created by the National Center for Education Statistics in collaboration with other federal agencies. It is designed to provide detailed information about children's early life experiences. The ECLS-B looks at children's health, development, care, and education during the formative years from birth through first grade. This session will discuss 3 aspects of bringing the project into being: (1) Crafting working relationships with many other agencies that have strong interests in children's issues (NCHS, NIH, USDA, ACYF, MCHB, OSEP, etc.) to forge a joint approach that takes many policy interests into account within the ECLS-B framework; (2) issues in designing a sample of newborns, including research on sampling by occurrence versus residence and quantifying the relative inefficiencies of using existing PSU samples, and the mathematical programming approach to sample allocation given three domains created by the analytic subgroups; and (3) technical challenges in survey operations -- incorporating computer-assisted personal interviewing (CAPI) with direct assessments of infants and toddlers, addressing longitudinal issues from the outset of the project design, collecting data from non-resident fathers, conducting experiments on the relative effects of cash and non-cash incentives, and training lay field staff to perform non-traditional survey activities.



Title: Correcting estimated relative risk for dietary measurement error: Have we been misleading ourselves?

  • Speaker: Victor Kipnis, Ph.D.
  • Discussant: Theresa Devine, CBO
  • Date/Time: Wednesday, September 1, 11:00 a.m.
  • Location: Conference Room G, Executive Plaza North (EPN), 6130 Executive Blvd, Rockville, MD.
  • Sponsor: NIH
Abstract:

Large-scale nutritional epidemiologic studies usually use food frequency questionnaires (FFQs) to measure a person's usual dietary intake over a defined period. Researchers have long recognized that FFQ measurement is subject to substantial error that can have a profound impact on assessment of the effect of an exposure on disease. Reference instruments, such as multiple-day food records or 24-hour recalls, that are supposed to provide the best possible surrogate for individuals' true intake values, are currently used to calibrate an FFQ and correct estimated disease risks for measurement error. The standard regression calibration approach requires that a reference measure contain only random within-person error, uncorrelated with any error component in the FFQ. Increasing evidence based on studies with biochemical markers suggests that dietary reference instruments are likely to be flawed with systematic biases as well, including person-specific bias, which may be correlated with its counterpart in the FFQ. In the talk, a new dietary measurement error model is presented that allows for this more complex error structure in a reference instrument. Using measures of protein intake from different dietary assessment methods, as well as data on urine nitrogen excretion, which is essentially equivalent to protein intake, allows a critical assessment of the proposed model vis-à-vis the standard regression calibration model and its recent modifications proposed in the literature. The results demonstrate that failure to account for a complex measurement error structure in the reference instrument, and for correlation between person-specific biases in the reference instrument and the FFQ, leads to substantial underestimation of the relative risk for a nutrient.
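As background for readers unfamiliar with regression calibration, the standard single-nutrient setup that the abstract contrasts itself with can be written, in our notation rather than the speaker's, as

    Q_i = \beta_0 + \beta_1 T_i + \epsilon_i \quad\text{(FFQ)}, \qquad
    R_i = T_i + u_i \quad\text{(reference, pure within-person error)},

    \lambda = \frac{\operatorname{Cov}(T, Q)}{\operatorname{Var}(Q)}, \qquad
    \hat{\theta}_{\text{corrected}} = \hat{\theta}_{\text{naive}} \,/\, \hat{\lambda},

where T_i is true intake, theta is the log relative risk for the nutrient, and lambda is estimated in practice by regressing the reference measurement on the FFQ in a calibration substudy. The talk's argument is that when the reference error u_i contains person-specific bias correlated with the FFQ's error, this estimate of lambda is distorted, so even the "corrected" relative risk remains underestimated.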



Topic: Event History Analysis of Interviewer and Respondent Survey Behavior

  • Speaker: James Lepkowski, University of Michigan
    (joint work with Vivian Siu and Justin Fisher)
  • Discussant: Fred Conrad, Bureau of Labor Statistics
  • Chair: Virginia deWolf, Office of Management & Budget
  • Date/Time: Thursday, September 9, 12:30 - 2:00 p.m.
  • Location: BLS Conference Center, Room 2990 (Cognitive Lab), Postal Square Building, 2 Massachusetts Avenue, NE, Washington, DC, 20212 (Metro Red Line-Union Station). Use the First St. NE entrance. Call Karen Jackson (202-606-7524) at least 2 days before the talk to be placed on the visitor list, and bring photo ID.
  • Sponsor: Methodology Section
Abstract:

The survey interview may be viewed as a longitudinal sequence of conversational exchanges between an interviewer and a respondent. Interviewer and respondent behavior reflect the dynamics of the interview and the mutual effects of interviewer behavior on respondents, and respondents on interviewer behavior. Event history models can be used to examine the timing of these behaviors and whether respondents modify the way they answer questions in response to interviewer behaviors, or whether interviewers modify their interviewing techniques in response to respondent behaviors.

A total of 297 interviews from a sample survey of members of a health maintenance organization in a metropolitan area in the United States were tape recorded, with subject permission. Survey interviewers not participating in the survey interviews were trained to listen to the tapes and record the presence of approximately 30 different types of interviewer and respondent behaviors at each question asked in the interview. Respondent behaviors such as laughter during an exchange and interrupting the reading of the question are examined as events occurring during the interview using standard event history analysis methods. Duration times to laughter or interruption vary across gender and race of respondents and gender and race of interviewers, and by the occurrence of the event (i.e., first, second, third, etc.). Cox proportional hazards models illustrate the association of respondent and interviewer characteristics with duration times. We also report on an initial exploration of the occurrence of a question characteristic as a time-varying covariate for interruption. These latter analyses seek to explain whether respondent behaviors, such as interrupting question reading, are a function of exposure to poorly written questions.
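For readers who want to see the shape of such an analysis, the sketch below fits a Cox proportional hazards model to simulated data with hypothetical variable names; it requires the pandas and lifelines packages and is not the authors' analysis.

    import numpy as np
    import pandas as pd
    from lifelines import CoxPHFitter

    rng = np.random.default_rng(2)
    n = 297
    resp_female = rng.integers(0, 2, n)
    int_female  = rng.integers(0, 2, n)

    # Simulate the question number at which a respondent first interrupts;
    # respondents who never interrupt are censored at question 40.
    baseline = rng.exponential(scale=20, size=n)
    time_to_event = baseline * np.exp(-0.3 * resp_female + 0.1 * int_female)
    interrupted = (time_to_event < 40).astype(int)
    time_to_event = np.minimum(time_to_event, 40)

    df = pd.DataFrame({
        "questions_until_interrupt": time_to_event,
        "interrupted": interrupted,
        "resp_female": resp_female,
        "int_female": int_female,
    })

    cph = CoxPHFitter()
    cph.fit(df, duration_col="questions_until_interrupt", event_col="interrupted")
    cph.print_summary()          # hazard ratios for respondent/interviewer covariates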



Topic: Balancing Confidentiality and Burden Concerns in Surveys and Censuses of Large Businesses

  • Speaker: Elizabeth Nichols, Bureau of the Census
    (joint work with Diane Willimack, Bureau of the Census, and Seymour Sudman, University of Illinois, Urbana-Champaign)
  • Discussant: Nancy Kirkendall, Energy Information Administration
  • Chair: George Werking, Bureau of Labor Statistics
  • Date/Time: Thursday, September 23, 12:30 - 2:00 p.m.
  • Location: BLS Conference Center, Rooms 9 & 10, Postal Square Building, 2 Massachusetts Avenue, NE, Washington, DC, 20212 (Metro Red Line-Union Station). Use the First St. NE entrance. Call Karen Jackson (202-606-7524) at least 2 days before the talk to be placed on the visitor list, and bring photo ID.
  • Sponsor: Methodology Section
Abstract:

There is a growing privacy concern among the general public about the sharing of personal information. Government statistical agencies such as the Census Bureau go to great lengths to make sure data collected are kept strictly confidential, and sharing of data between government agencies is severely restricted. Discussions of confidentiality of company data typically ignore concerns about the burden placed on companies that supply data to multiple government agencies. From the perspective of many large companies, however, burden is of great concern since these companies, because of their size, are respondents in all surveys.

During the past year, staff from the Census Bureau visited 30 large multi-unit companies with the goal of identifying ways to ease their reporting burden. Companies acknowledged that their data are confidential, but much of what we ask typically has been released earlier to shareholders and upper management; thus, it is already considered to some extent public information. Companies put much more emphasis on the number of survey requests placed on them, since many of the surveys seem, from their perspective, to ask for similar information. Large companies generally supported data sharing among statistical agencies under well-specified conditions and with rigorous security and confidentiality provisions in place. They only saw value in this sharing, however, if it reduced the reporting burden placed on them. Medium and smaller sized private companies may have greater concerns about confidentiality and lesser concerns about burden, but the large firms make the largest contribution to estimates and their concerns about burden need addressing. In this presentation we make some suggestions as to how burden might be reduced.

Return to top


Topic: The American Community Survey: A Status Report for Statisticians

  • Speaker: Chip Alexander, Bureau of the Census
  • Discussant: Joe Waksberg, Westat
  • Chair: Dwight Brock, National Institute on Aging
  • Date/Time: Thursday, September 30, 12:30 - 2:00 p.m.
  • Location: BLS Conference Center, rooms 2 & 3, Postal Square Building, 2 Massachusetts Avenue, NE, Washington, DC, 20212 (Metro Red Line-Union Station). Use the First St. NE entrance. Call Karen Jackson (202-606-7524) at least 2 days before talk to be placed on the visitor list, and bring photo ID.
  • Sponsor: Methodology Section
Abstract:

The American Community Survey is a new program to collect "census long form data" continuously throughout the decade. The program has important implications for the current users of long form data, including federal government household survey programs and program agencies that use census data. It also has important implications for the 2010 census. The ACS has finished its "demonstration period" of testing and is now being conducted in 31 sites across the country as the start of a comparison of the survey results to the 2000 census long form data.

This talk reviews our current information about statistical aspects of the American Community Survey, including (1) results to date and further research plans for comparing it to the census long form, (2) thoughts about the most promising uses by other Federal survey programs, (3) uses of multi-year data in analysis, (4) development of the address file, and (5) implications for the 2010 census.

Return to top


Title: Membership Functions and Probability Measures of Fuzzy Sets

  • Speaker: Nozer D. Singpurwalla, The George Washington University
  • Time: 4:00 pm, Oct. 1, 1999
  • Location: Funger 323, 2201 G Street NW, Foggy Bottom metro stop on the blue and orange line.
  • Organizer: Engineering Management and Systems Analysis and Statistics Departments, The George Washington University
Abstract:

Membership functions characterize fuzzy sets. A calculus for these functions has been proposed by Zadeh, and has been suggested by him as an alternative to the calculus of probability. His claim is that the calculus of probability is inadequate for describing all types of uncertainty. The alternative calculus is termed possibility theory. Possibility Theory has been criticized on grounds that it does not have an axiomatic foundation based on primitives that are natural.

This talk is in two parts: the first is expository; it overviews fuzzy sets, membership functions and possibility theory. The second part shows how membership functions can be used to obtain probability measures of fuzzy sets. The crux of the idea behind the latter is to conceptualize the decision maker as a probabilist who elicits expert testimony via the membership function. Our conceptualization makes the process coherent vis-à-vis the behaviouristic axioms of Ramsey and Savage.
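
As background for the second part of the talk, the sketch below computes the classical (Zadeh-style) probability of a fuzzy event as the expected membership under a probability distribution. The membership function and distribution are invented for illustration; this is not the speaker's construction.

```python
# Sketch only: probability of a fuzzy event as P(A) = E[mu_A(X)],
# evaluated on a discretized probability distribution.
import numpy as np

x = np.linspace(150, 210, 601)                # heights in cm (hypothetical variable)
p = np.exp(-0.5 * ((x - 175) / 8) ** 2)       # unnormalized normal density
p /= p.sum()                                   # discrete probability weights

def mu_tall(h):
    """Membership in the fuzzy set 'tall': ramps from 0 at 170 cm to 1 at 190 cm."""
    return np.clip((h - 170.0) / 20.0, 0.0, 1.0)

prob_tall = np.sum(mu_tall(x) * p)            # expected membership = probability of the fuzzy event
print(f"P(tall) = {prob_tall:.3f}")
```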

For a complete list of upcoming seminars, check the department's seminar web site: http://www.gwu.edu/~stat/seminars/fall99.html

Return to top

Title: Saving for a Rainy Day: Does Pre-Retirement Access To Retirement Saving Increase Retirement Saving?

  • Speaker: Thomas Hungerford, Social Security Administration
  • Discussant: Eric Engen, Federal Reserve Board
  • Chair: Arthur Kennickell, Federal Reserve Board
  • Date/Time: Wednesday, October 6, 1999, 12:30-2:00 P.M.
  • Location: Bureau of Labor Statistics, 2 Massachusetts Ave. NE, Room 2990 (Cognitive Laboratory). Enter at Massachusetts Avenue and North Capitol Street (Red Line: Union Station). Visitors outside the BLS, please call Karen Jackson at (202) 606-7524 at least two days in advance to have your name placed on the guard's list for admittance. Please bring a photo ID.
  • Sponsor: Economics Section
Abstract:

While overall pension coverage has remained relatively stable over the past 20 years, the composition of pension coverage has changed dramatically: the number of defined benefit plans has been decreasing and the number of defined contribution pension plans, especially 401(k) plans, has been growing. Unlike a defined benefit plan, participation in a 401(k) plan is voluntary and workers determine how much to contribute to their pension account. Workers are increasingly responsible for ensuring most of their retirement income security. A combination of carrots and sticks is used to encourage workers to save for retirement in 401(k) plans. Among the carrots used is allowing participants to borrow from their pension accounts. Advocates of pension plan borrowing argue that withdrawal provisions are an incentive for lower-income workers to participate in and make contributions to voluntary pension plan accounts. Opponents of these provisions argue that withdrawal provisions work against the policy objective of enhancing retirement income. This paper examines the validity of the argument for allowing pension plan borrowing (i.e., whether or not allowing pre-retirement withdrawals enhances participation in and contributions to 401(k) plans). The results present evidence that allowing workers pre-retirement access to their 401(k) accounts increases both participation in and contributions to 401(k) plans.

Return to top


Topic: Variance Estimation 1: Accounting for Imputation
(first of two-part series)

  • Speaker: Jill M. Montaquila, Westat
  • Discussant: Sylvia Leaver, Bureau of Labor Statistics
  • Chair: Bob Jernigan, American University
  • Date/Time: Tuesday, October 12, 12:30 - 2:00 p.m.
  • Location: BLS Cognitive Lab, room 2990, Postal Square Building, 2 Massachusetts Avenue, NE, Washington, DC, 20212 (Metro Red Line-Union Station). Use the First St. NE entrance. Call Karen Jackson (202-606-7524) at least 2 days before talk to be placed on the visitor list, and bring photo ID.
  • Sponsor: Methodology Section
Abstract:

For the past several decades, imputation methods have been used to compensate for item nonresponse. Traditionally, the imputed values have been treated as if they had actually been observed or reported, and variance estimates have been computed using standard complete data methods. This approach leads to underestimation of the variance of the estimator. Even for items with relatively low nonresponse, this downward bias in the variance estimates may be substantial.

In this talk, we demonstrate the bias that may result when imputation error variance is not accounted for in the variance estimator. We review methodological approaches for accounting for imputation error variance, including the Rao-Shao jackknife, the bootstrap method, a model-assisted approach, and multiple imputation. A new method for accounting for imputation error variance, all-cases imputation, is described. The all-cases imputation (ACI) method imputes a value of the characteristic of interest for all cases, including those with actual (observed or reported) values. The difference between the imputed value and the actual value (imputation error) for respondents is used to estimate the imputation error variance and covariance for nonrespondents. We demonstrate, both analytically and empirically, the properties of our proposed variance estimator. This talk includes a discussion of applications of the ACI method.
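
By way of illustration, the sketch below applies Rubin's combining rules for multiple imputation, one of the approaches listed above; it is not the all-cases imputation method, and the estimates shown are hypothetical.

```python
# Sketch only: Rubin's combining rules for m multiply imputed data sets.
import numpy as np

theta_m = np.array([10.2, 10.5, 9.9, 10.4, 10.1])   # point estimates from each completed data set
var_m = np.array([0.30, 0.28, 0.33, 0.29, 0.31])     # within-imputation variances

m = len(theta_m)
theta_bar = theta_m.mean()            # combined point estimate
W = var_m.mean()                      # average within-imputation variance
B = theta_m.var(ddof=1)               # between-imputation variance
T = W + (1 + 1 / m) * B               # total variance reflecting imputation uncertainty
print(theta_bar, T)
```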

Return to top


The 1998 Morris Hansen Lecture

Title: Diagnostics for Modeling and Adjustment of Seasonal Data

  • Speaker: David F. Findley, U.S. Census Bureau
  • Discussants:
    Allan Young, former director of the Bureau of Economic Analysis
    William P. Cleveland, Federal Reserve Board
  • Chair: Nancy J. Kirkendall, Energy Information Administration
  • Date/Time: Tuesday, October 26, 1999; 3:30 - 5:30 p.m.
  • Location: The Jefferson Auditorium, USDA South Building, between 12th and 14th Streets on Independence Avenue S.W., Washington DC. The Independence Avenue exit from the Smithsonian METRO stop is at the 12th Street corner of the building, which is also where the handicapped entrance is located.
  • Sponsors: The Washington Statistical Society, WESTAT, and the National Agricultural Statistics Service.
  • Reception: The lecture will be followed by a reception from 5:30 to 6:30 in the patio of the Jamie L. Whitten Building, across Independence Ave.
Abstract:

Seasonal adjustment is applied to economic data from periodic surveys to help analysts detect economically meaningful changes in the time series. Adjustment can dramatically alter the measured values. The time series to which it is applied usually have seasonal movements whose pattern changes over time, and many are subject to unsystematic effects, for example abrupt changes of level or the turbulent, market-driven behavior of small numbers of large companies. There are also systematic effects that require appropriate treatment, such as heteroskedasticity between different calendar months, day of week effects, and effects due to the shifting dates of holidays such as Easter.

Progress in seasonal adjustment depends on the development of models or methods that better account for these effects. Perhaps less obviously, progress also depends on the development of diagnostics that identify the presence of such effects and show whether treatments applied for them result in satisfactory adjustments. Much recent progress has come from the use of specialized time series models and appropriately focused diagnostics, especially graphical diagnostics.

Our talk begins with a review of basic motivations, ideas, and issues of seasonal adjustment. There follows a presentation of some seasonal adjustment, modeling, and model selection diagnostics of the Census Bureau's new X-12-ARIMA and X-12-Graph programs, with an emphasis on graphical diagnostics. The applications we provide of these diagnostics show how some of them address fundamental statistical modeling issues.
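
For orientation, the sketch below performs a simple classical seasonal decomposition on simulated monthly data using the Python statsmodels package. It is far simpler than X-12-ARIMA, but it shows the trend/seasonal/irregular split around which seasonal adjustment diagnostics are built.

```python
# Sketch only: classical additive seasonal decomposition of simulated monthly data.
import numpy as np
import pandas as pd
from statsmodels.tsa.seasonal import seasonal_decompose

rng = np.random.default_rng(0)
idx = pd.date_range("1990-01", periods=120, freq="MS")        # 10 years of monthly data
y = (100 + 0.3 * np.arange(120)                                # trend
     + 10 * np.sin(2 * np.pi * np.arange(120) / 12)            # stable seasonal pattern
     + rng.normal(0, 2, 120))                                  # irregular component
series = pd.Series(y, index=idx)

result = seasonal_decompose(series, model="additive", period=12)
adjusted = series - result.seasonal                            # a crude "seasonally adjusted" series
print(adjusted.head())
```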

Return to top


Title: Rethinking Historical Controls

  • Speaker: Stuart G. Baker
  • Date/Time: Wednesday Nov 3, 1999 11:00 a.m. to noon
  • Location: Conference Room G, Executive Plaza North (EPN), 6130 Executive Blvd, Rockville, MD
  • Sponsor: NIH
Abstract:

Traditionally when making inference from historical controls, one compares the new treatment in a current series of patients with the old treatment in a previous series of patients. It is well known that inference based on traditional historical controls is subject to a strong selection bias. To avoid this bias, Baker and Lindeman (1994) proposed a paired availability design in which hospitals undergo a sudden change in the availability of a new treatment, and for each hospital one compares outcomes among all eligible subjects before and after the change. A model for all-or-none compliance is used to estimate the effect of receipt of new treatment in each hospital, and estimates are combined over all hospitals. To analyze retrospective data in which hospitals underwent either a sudden or gradual change in availability, Baker and Lindeman (1999) generalized the model for all-or-none compliance to multiple time periods. In an application involving the effect of epidural analgesia on the probability of Cesarean section, the point estimate was similar to that from a meta-analysis of randomized trials (but with a tighter confidence interval) and differed substantially from a propensity score estimate from concurrent controls.

Return to top
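
For readers unfamiliar with the design, the sketch below illustrates the before/after logic that motivates the paired availability approach: within each hospital the change in the outcome rate is divided by the change in the proportion receiving the new treatment, and the per-hospital estimates are combined. It is only a caricature of the idea, not Baker and Lindeman's actual estimator, and all numbers are hypothetical.

```python
# Sketch only: per-hospital before/after ratio estimates of the effect of
# receipt of the new treatment, combined across hospitals.
import numpy as np

y_before = np.array([0.20, 0.25, 0.18])   # outcome rate before the availability change
y_after  = np.array([0.17, 0.21, 0.16])   # outcome rate after the change
r_before = np.array([0.05, 0.10, 0.02])   # proportion receiving the new treatment before
r_after  = np.array([0.55, 0.60, 0.45])   # proportion receiving it after
n        = np.array([800, 1200, 600])     # eligible subjects per hospital

effect_h = (y_after - y_before) / (r_after - r_before)   # per-hospital effect of receipt
combined = np.average(effect_h, weights=n)               # simple size-weighted combination
print(effect_h, combined)
```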

Topic: Variance Estimation: Effects of Weight Adjustments
(second of two-part series)

  • Speakers: Mike Brick & David Morganstein, Westat
    (joint work with Brandon Barrett, Westat)
  • Discussant: Bob Fay, Bureau of the Census
  • Chair: Phil Kott, National Agricultural Statistics Service
  • Date/Time: Thursday, November 4, 12:30 - 2:00 p.m.
  • Location: BLS Room 2990 (Cognitive Lab), Postal Square Building, 2 Massachusetts Avenue, NE, Washington, DC, 20212 (Metro Red Line-Union Station). Use the First St. NE entrance. Call Karen Jackson (202-606-7524) at least 2 days before talk to be placed on the visitor list, and bring photo ID.
  • Sponsor: Methodology Section
Abstract:

The computation of estimation weights for most sample surveys includes adjustments for nonresponse. In addition, survey weights are often adjusted so they sum to known control totals using calibration methods such as poststratification or raking. These adjustments are often elaborate and deemed essential for improving the statistical properties of the estimates. The weight adjustments can reduce biases and can ensure that certain critical statistics match either census figures or population estimates known with high reliability. Although weight adjustments affect the variance of the estimates, the effect may be largely ignored or overlooked when variance estimates are computed.
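
As a small illustration of one such adjustment, the sketch below rakes a set of cell weights to known row and column control totals via iterative proportional fitting. The weights and controls are hypothetical, and the example is far simpler than production calibration systems.

```python
# Sketch only: raking (iterative proportional fitting) of survey weights so
# that weighted margins match known control totals.
import numpy as np

# Cell totals of base weights for a 2 x 2 cross of sex by age group.
w = np.array([[120.0, 180.0],
              [140.0, 160.0]])
row_controls = np.array([320.0, 280.0])   # known population totals by sex
col_controls = np.array([250.0, 350.0])   # known population totals by age group

for _ in range(50):                        # iterate until both margins match
    w *= (row_controls / w.sum(axis=1))[:, None]
    w *= (col_controls / w.sum(axis=0))[None, :]

print(w, w.sum(axis=1), w.sum(axis=0))
```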

This talk examines the effect of some aspects of weight adjustments on the variances of estimates by using replication methods of variance estimation. The presenters begin with a brief review of replication variance estimation methods. Next, they give empirical results on the effect of weight adjustments on the variances of estimates from a national survey. Lastly, they give an approximation to the effect of a finite population correction factor on the variance of estimates for two-stage samples.

Return to top

Topic: Using Auxiliary Data to Improve the Survey Frame and Reduce Survey Costs

  • Speaker: Debra Taylor, Statistics New Zealand
  • Discussant: Louis Rizzo, Westat
  • Chair: Richard Moore, Bureau of the Census
  • Date/Time: Tuesday, November 9, 12:30 - 2:00 p.m.
  • Location: BLS Room 2990 (Cognitive Lab), Postal Square Building, 2 Massachusetts Avenue, NE, Washington, DC, 20212 (Metro Red Line-Union Station). Use the First St. NE entrance. Call Karen Jackson (202-606-7524) at least 2 days before talk to be placed on the visitor list, and bring photo ID.
  • Sponsor: Methodology Section
Abstract:

The paper will explore the issue of developing efficient surveys of Maori in New Zealand, highlight drawbacks to the standard area frame approach, and discuss possible improvements achieved by combining an area frame with an electoral roll to improve the targeting of the population of interest. It will discuss both theoretical and practical issues raised.

Return to top

Topic: Policy Options for Reducing Poverty Among Elderly Women

  • Speaker: Theresa J. Devine, Congressional Budget Office
  • Discussant: Patricia Ruggles, Department of Health and Human Services
  • Chair: Arthur Kennickell, Federal Reserve
  • Date/Time: Wednesday, November 10, 1999; 12:30 p.m. - 2:00 p.m.
  • Location: Bureau of Labor Statistics, 2 Massachusetts Ave. NE, Room 2990 (Cognitive Laboratory). Enter at Massachusetts Avenue and North Capitol Street (Red Line: Union Station). Visitors outside the BLS, please call Karen Jackson at (202) 606-7524 at least two days in advance to have your name placed on the guard's list for admittance. Please bring a photo ID.
  • Sponsor: Economics Section
Abstract:

The poverty rate for all women ages 65 and older was 13.1 percent in 1997 -- nearly twice the poverty rate for older men. For older women who lived alone, the poverty rate was 22.4 percent. These high rates are expected to continue if current policy is unchanged. Because Social Security is a central part of the safety net for older women today, with nearly all women ages 65 and older receiving some Social Security benefits and many relying exclusively on this income, proposals to reduce elderly poverty have naturally centered on changes in Social Security rules. Increases in minimum benefits and other policies have been proposed as ways to focus Social Security more on reducing poverty among the elderly.

This paper examines poverty among older women and a range of policy options that have been proposed as ways to reduce it. For each option, the paper asks: How much would the option raise the incomes of older women who are otherwise most likely to be poor? How would the option affect the incomes of other beneficiaries? How would the option affect spending on Social Security and other government programs? How would the option affect the link between payroll taxes and benefits?

Return to top


Title: A Bayesian Exploratory Data Analysis of Spatio-Temporal Patterns in U.S. Homicide Rates

  • Speakers: Balgobin Nandram, Worcester Polytechnic Institute and National Center for Health Statistics, and Linda Williams Pickle, National Cancer Institute
  • Date/time: Tuesday, November 16, 1999, 10:30-11:30 a.m.
  • Location: National Center for Health Statistics, Room 1110, Presidential Building, 6525 Belcrest Rd., Hyattsville, MD 20782. Near the metro station of Prince Georges Plaza (Shopping Mall) on the East-West Highway.
  • Sponsor: Public Health and Biostatistics
Abstract:

Recently, homicide has been declared an important public health problem that could be studied using epidemiological tools. To assess spatio-temporal patterns, we study homicide data for 1979-96, which consist of six 3-year periods over 798 health service areas (HSAs), small areas that together make up the continental U.S. We fit several Bayesian hierarchical models to the overdispersed Poisson data; the models include socioeconomic covariates, correlation over time, and different functional forms of age effects. Several Bayesian diagnostic procedures are used to select one of the models, and then to further assess the goodness of fit of the selected model. Hypotheses about the patterns are tested using the Bayes factor with the selected model, and maps are used to display the patterns. One particular pattern of rates over time fits most HSAs well. Other patterns, including a decreasing trend, are also plausible for some age classes for the entire U.S. and for some subsets of regions, levels of urbanization, states and HSAs.
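
As a much simpler relative of the models described above, the sketch below applies conjugate Poisson-gamma smoothing to hypothetical small-area counts, showing the kind of shrinkage of unstable rates that hierarchical Bayesian models provide. It is not the authors' model.

```python
# Sketch only: conjugate Poisson-gamma smoothing of small-area rates.
import numpy as np

deaths   = np.array([3, 40, 0, 12])            # homicide counts in four hypothetical areas
exposure = np.array([20e3, 500e3, 5e3, 90e3])  # person-years at risk

# Gamma(a, b) prior on the rate per person-year, with prior mean a/b = 6 per 100,000.
a, b = 2.0, 2.0 / 6e-5

post_mean = (a + deaths) / (b + exposure)      # posterior mean rate for each area
raw_rate = deaths / exposure
print(np.c_[raw_rate, post_mean] * 1e5)        # rates per 100,000: raw vs. smoothed
```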

Return to top


Topic: A Cognitive Study of Methods of Classifying Epidemiological Data on a Series of Choropleth Maps

  • Speaker: Cynthia Brewer, Associate Professor, Department of Geography, The Pennsylvania State University, University Park, PA 16802
  • Chair: Linda Pickle, National Cancer Institute
  • Date/time: Friday, November 19, 1999, 10:00 - 11:00 a.m.
  • Location: Executive Plaza North, National Cancer Institute, 1st floor Conference rooms D/E/F; 6130 Executive Blvd., Rockville, MD, near White Flint metro stop.
  • Sponsor: NCI GIS Special Interest Group and the WSS Public Health & Biostatistics section
Abstract:

Choropleth maps show enumeration areas, such as counties, with color fills keyed to ranges in the mapped data. For example, a map of lung cancer rates may use a light yellow to represent low rates with gradations through orange to a dark red for high rates. This type of map is a standard method for showing distributions of disease rates and wide-ranging socioeconomic variables. The mapmaker's decisions about the ranges in the data represented by each color affect the appearance of map patterns, and there are many ways to systematically divide the data range into data classes. Classification options are readily available in mapping and GIS software, but basic guidance for their application is lacking. We used 28 data sets and 56 subjects to examine the effects of seven five-class map classification schemes on both the accuracy of and confidence in user responses: hybrid equal intervals, quantiles (e.g., percentiles), box-plot based classes, mean and standard deviation based classes, Jenks optimization, minimized boundary error, and constant map area in shared classes. Map test questions ranged from questions about particular polygons to differences in whole map patterns. We placed particular emphasis on the effects of classification on map-to-map comparisons in our research design. User-testing results indicated that the accuracy of responses varied significantly between classifications, with responses to quantile maps being most accurate.
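
For concreteness, the sketch below computes the class breaks for a five-class quantile map, the scheme that performed best in the user testing described above. The rates are hypothetical.

```python
# Sketch only: five-class quantile (percentile) classification of county rates.
import numpy as np

rates = np.array([12.1, 45.3, 22.8, 31.0, 18.4, 55.2, 27.5, 40.9, 15.7, 36.6])

breaks = np.quantile(rates, [0.2, 0.4, 0.6, 0.8])   # interior cut points for 5 classes
classes = np.digitize(rates, breaks)                 # class index 0-4 for each county
print(breaks, classes)
```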

Return to top


Topic: The Atlas of Cancer Mortality in the United States, 1950-94 and Its Associated Web Sites

  • Speaker: Dan Grauman, Division of Cancer Epidemiology and Genetics, National Cancer Institute
  • Chair: Linda Pickle, National Cancer Institute
  • Date/time: Friday, November 19, 1999, 11:15 a.m. - 12:30 p.m.
  • Location: Executive Plaza North, National Cancer Institute, 1st floor Conference rooms D/E/F; 6130 Executive Blvd., Rockville, MD, near White Flint metro stop.
  • Sponsor: NCI GIS Special Interest Group and the WSS Public Health & Biostatistics section
Abstract:

The geographic patterns of cancer around the world and within countries have provided important clues to the environmental and occupational determinants of cancer. In the mid-1970s the National Cancer Institute prepared county-based maps of cancer mortality in the U.S. that identified distinctive variations and hot-spots for specific tumors, thus prompting a series of analytic studies of cancer in high-risk areas of the country. We have prepared an updated atlas of cancer mortality in the United States during 1950-94, based on mortality data from the National Center for Health Statistics and population estimates from the Census Bureau. Rates per 100,000 person-years, directly standardized using the 1970 US population, were calculated by race (whites, blacks) and gender for 40 forms of cancer. The new atlas includes more than 140 computerized color-coded maps showing variation in rates during 1970-94 at the county (more than 3000 counties) or State Economic Area (more than 500 units) level. Summary tables and figures are also presented. Over 100 maps for the 1950-69 period are also included. Accompanying text describes the observed variations and suggests explanations based in part on the findings of analytic studies stimulated by the previous atlases. The geographic patterns of cancer displayed in this atlas should help to target further research into the causes and control of cancer.

Two Web sites associated with the atlas will be demonstrated. The first, a static Web site, enables the user to view the entire contents of the atlas, as well as to download graphic images and data used to generate the maps. The second Web site is dynamic, and allows the user to change the number of ranges and ranging method. The user can also focus on a specific geographic region.

Return to top


Topic: Estimation & Variance Estimation in a Standardized Economic Processing System

  • Speaker: Richard Sigman, Census Bureau
  • Discussant: Richard Valliant, Westat
  • Chair: Jim Gentle, George Mason University
  • Date/Time: Tuesday, November 23, 12:30 - 2:00 p.m.
  • Location: BLS Room 2990 (Cognitive Lab), Postal Square Building, 2 Massachusetts Avenue, NE, Washington, DC, 20212 (Metro Red Line-Union Station). Use the First St. NE entrance. Call Karen Jackson (202-606-7524) at least 2 days before talk to be placed on the visitor list, and bring photo ID.
  • Sponsor: Methodology Section
Abstract:

The U.S. Census Bureau is developing a generalized survey processing system for economic surveys, called the Standardized Economic Processing System (StEPS). StEPS will replace 15 separate systems that are being used to process 113 current economic surveys. This talk will briefly describe the overall design of StEPS and discuss in detail the statistical methods and operational design of the StEPS modules for estimation and variance estimation.

For estimation, we found that a single computation approach, based in part on the generalized regression estimator, was successful in calculating all needed estimates, except for quantiles. For variance estimation, however, we found it necessary to provide StEPS users with different software options depending on the sample design and method of variance estimation.
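
As a point of reference, the sketch below computes one common textbook form of the generalized regression (GREG) estimator of a population total from weighted sample data and known auxiliary totals. It is not the StEPS implementation, and the data are hypothetical.

```python
# Sketch only: GREG estimator of a population total using design weights and
# known population totals of auxiliary variables.
import numpy as np

y = np.array([12.0, 30.0, 7.5, 22.0])            # survey variable for sampled units
X = np.array([[1.0, 10.0],                        # auxiliaries: intercept and, say, payroll
              [1.0, 28.0],
              [1.0, 6.0],
              [1.0, 20.0]])
w = np.array([50.0, 40.0, 60.0, 45.0])            # design weights
t_x = np.array([200.0, 3100.0])                   # known population totals of the auxiliaries

B = np.linalg.solve(X.T @ (w[:, None] * X), X.T @ (w * y))   # weighted regression coefficients
t_y_greg = w @ y + (t_x - w @ X) @ B                          # Horvitz-Thompson term + calibration term
print(t_y_greg)
```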

StEPS has separate submodules for Poisson-sampling variances, Tille-sampling variances, variances calculated using the method of random groups, and variances calculated by VPLX using pseudo-replication methods.

The StEPS modules for estimation and variance estimation consist of the following design elements:
  • Reusable SAS macros;

  • Metadata files describing survey data and resulting estimates;

  • Two types of data files: (1) a general-purpose "skinny" file and (2) a survey-specific "fat" file, automatically generated from the "skinny" file under control of metadata; and

  • A survey-specific "script", which controls the estimation and variance-estimation processing for a particular survey.
We will chronicle our experiences from 1997 through 1999 in using the StEPS estimation and variance estimation modules to migrate surveys into StEPS. We conclude with a discussion of possible future enhancements to the estimation and variance estimation functions in StEPS.

Return to top

Topic: Estimation from Linked Surveys
(Three-way Video Conference - Michigan, Maryland, BLS)

  • Speaker: Trivellore Raghunathan, University of Michigan & JPSM
  • Date/Time: Thursday, December 2, 12:10 - 1:00 p.m.
  • Locations: (1) University of Maryland, 1218 Lefrak Hall (a map to the location is found on www.jpsm.umd.edu or call 301 314 7911); (2) BLS Conference Center, room 9, Postal Square Building, 2 Massachusetts Avenue, NE, Washington, DC, 20212 (Metro Red Line-Union Station). Use the First St. NE entrance. Call Karen Jackson (202-606-7524) at least 2 days before talk to be placed on the visitor list, and bring photo ID.
  • Sponsors: JPSM & WSS Methodology Section
Abstract:

The objective of this talk is to present methods that improve the efficiency of estimates of population quantities by using auxiliary data from linked surveys. Estimation in two types of linked surveys will be discussed. The first type of link, a "tight link," occurs when the sample pool from a large survey is used as a frame for another survey. An example of such a link is the National Survey of Family Growth, which used the National Health Interview Survey sample pool as its frame. The second type of link, a "loose link," occurs when multiple surveys use the same PSUs or share geographical regions, a common feature in many national surveys. A Bayesian framework is used to develop the estimates. The design-based properties of these estimates will be investigated in terms of bias, mean square error, and confidence interval coverage. The efficiency of estimates with and without using information from linked surveys will be compared.

Return to top

Topic: Survey Design & Analysis for Hierarchical Linear Models

  • Speaker: Michael P. Cohen, National Center for Education Statistics
  • Discussant: John Eltinge, Bureau of Labor Statistics
  • Chair: Don Malec, Bureau of the Census
  • Date/Time: Wednesday, December 15, 12:30 - 2:00 p.m.
  • Location: BLS Conference Center, Room 3, Postal Square Building, 2 Massachusetts Avenue, NE, Washington, DC, 20212 (Metro Red Line-Union Station). Use the First St. NE entrance. Call Karen Jackson (202-606-7524) at least 2 days before talk to be placed on the visitor list, and bring photo ID.
  • Sponsor: Methodology Section
Abstract:

Social, health, behavioral, and economic data often have a nested structure (for example, students nested within schools or patients within hospitals). Recently, techniques and computer programs have become available for dealing with such data, permitting the formulation of explicit hierarchical linear models with hypotheses about effects occurring at each level and across levels. If data users are planning to analyze survey data using hierarchical linear models, rather than concentrating on means, totals, and proportions, this needs to be accounted for in the survey design. The implications for determining sample sizes (for example, the number of schools in the sample and the number of students sampled within each school) in large national surveys are explored. In addition, software and books for hierarchical linear modeling will be discussed.

Return to top
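
As a concrete example of the kind of model discussed in the abstract above, the sketch below fits a two-level random-intercept model for students nested within schools using the Python statsmodels package. The simulated data and variable names are illustrative only.

```python
# Sketch only: random-intercept hierarchical linear model for students within schools.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n_schools, n_students = 20, 30
school = np.repeat(np.arange(n_schools), n_students)
school_effect = rng.normal(0, 2, n_schools)[school]           # school-level random intercepts
ses = rng.normal(0, 1, n_schools * n_students)                 # student-level covariate
score = 50 + 3 * ses + school_effect + rng.normal(0, 5, school.size)

df = pd.DataFrame({"score": score, "ses": ses, "school": school})
model = smf.mixedlm("score ~ ses", df, groups=df["school"])    # random intercept for each school
result = model.fit()
print(result.summary())
```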
