
Washington Statistical Society Seminar Archive: 1996

January
9 Tues. Using Noise for Disclosure Limitation of Establishment Tabular Data
19 Fri. Statistical Methods for Detecting and Characterizing Departures from Additivity in Multi-Dimensional Drug/Chemical Mixtures
23 Thur. Parameter Estimation for a Normal Mixture Model with Migration Among Subpopulations, in an Application to Record Linkage
February
8 Thur. Rank Invariant Score Tests for Interval Censored Data
15 Thur. Maximizing and Minimizing Overlap When Selecting a Large Number of Units Per Stratum
21 Wed. Marginal Regression Models for Ordered Categorical Item Responses
Rescheduled from November 15
March
7 Thurs. A Race To The Bottom: The Case of Unemployment Insurance
7 Thurs. Impacts of Sampling for Nonresponse Follow-up on Census Methodology for a One Number Census
13 Wed. Estimation of Variance Components for the U. S. Consumer Price Index
20 Wed. MACROECONOMICS: PRO(jections) and CON(jectures)
25 Mon. The Commerce Department-Wide Customer Satisfaction Survey: From Start to Finish
April
4 Thurs. How many benchmarks?
11 Thurs. The Competing Risks of Mortgage Terminations by Default and Prepayment
16 Tues. The Transformational Performance Management Model
16 Tues. Cosmeticization and Calibration of Sample Survey Estimators
16 Tues. The Information at the Frontiers of Statistics
17 Wed. New Measures of Contingent and Alternative Employment
18 Thurs. A Bayesian Treasure Hunt in Flatland
23 Tues. A Conditional Power Trip on Fisher's LSD
24 Wed. Using Noise for Disclosure Limitation of Establishment Tabular Data
May
2 Thurs. Replicate Variance Estimation in Stratified Sampling with Permanent Random Numbers
7 Tues. Issues in Adaptive Sampling
8 Wed. Design To Diagnosis: A Model Based Approach To Analyzing Multiple Indicator Outcomes
8 Wed. The Prevalence of Answering Machine Usage in Agricultural Survey Populations
13 Mon. Modeling Multivariate Discrete Failure Time Data
23 Thurs. Quality Improvement Driven Government
29 Wed. Knowledge-based Classification of Survey Data
June
11 Tues. Changing Inequality in the U.S. from 1980-1994: A Consumption Viewpoint
18 Tues. PRESIDENTIAL INVITED ADDRESS:
Towards a Unified Federal Approach to Statistical Confidentiality
19 Wed. Time Series Decomposition of Periodic Survey Data with Autocorrelated Errors
26 Wed. A New Algorithm for Nonlinear Least Squares Estimation: Application to the Langmuir Model
September
4 Wed. A Permutational Step-up Method of Testing Multiple Outcomes
10 Tues. Empirical Bayes Analysis of Large Occupational Mortality Contingency Tables
18 Wed. Effects of Time and Salience on Expenditure Recall Accuracy in Diary Surveys
25 Wed. Intervention Analysis with Control Groups
October
8 Tues. Markov Models for Longitudinal Data from Complex
10 Thurs. Retrospective Recall: A Review of the Effects of Length of Recall on the Quality of Survey Data
15 Tues. 1996 Roger Herriot Award Methodology Session
22 Tues. Adjusting for a Calendar Effect in Employment Time Series
23 Wed. CANCELLED: Frequency Valid Multiple Imputation For Surveys With a Complex Design
30 Wed. SIXTH ANNUAL MORRIS HANSEN LECTURE: The Hansen Era: Statistical Research at the Census Bureau, 1940-1970
31 Thurs. Credit Risk, Credit Scoring, and the Performance of Home Mortgages
November
7 Thur. Sampling and Estimation from Multiple List Frames
13 Wed. The Impacts of Two-Year College on the Labor Market and Schooling Experiences of Young Men
19 Tues. The Impact of Incentives in a Government Survey
WSS Statistics and Public Policy Programs, 1996-97
20 Wed. Election Night Projections
December
3 Tues. How GPRA will affect you: A panel discussion on the policy implications of the Government Performance and Results Act
4 Wed. Scheduling Periodic Examinations for the Early Detection of Disease: Applications to Breast Cancer
11 Wed. Small-Biz Blarney
17 Tues. 100 Years of Meetings: A Celebration of the First American Statistical Association Meeting in Washington

Topic: Using Noise for Disclosure Limitation of Establishment Tabular Data

  • Speaker: B. Timothy Evans, Bureau of the Census
  • Discussant: Sallie Keller-McNulty, National Science Foundation
  • Chair: Julia L. Bienias, Bureau of the Census
  • Sponsor: Methodology Section
  • Date/Time: Tuesday, January 9, 1996, 12:30 - 2:00 p.m.
  • Place: BLS Cognitive Lab, Postal Square Building, Room 2990, 2 Massachusetts Avenue, NE, Washington, DC (Red Line -- Union Station). Enter at Massachusetts Avenue and North Capitol Street. Please call Julia Bienias (301-457-4934) or Sandy West (202-606-7384) to be placed on the visitor's list.
Abstract:

The Bureau of the Census is looking into new methods of disclosure limitation for use with establishment tabular data. Currently we use a strategy that suppresses a cell in a table if the publication of that cell could potentially lead to the disclosure of an individual respondent's data. In our search for an alternative to cell suppression that would allow us to publish more data, we are considering the option of adding noise to our underlying microdata. By perturbing each respondent's data, we can provide protection to individual respondents without having to suppress cell totals.

Adding noise has several advantages over cell suppression. Cell suppression is a complicated and time-consuming procedure. Also, cell suppression must be performed separately for each data product, whereas noise need only be added once. The question remains, however, as to the utility of the data after noise is added. In this paper we discuss the advantages and disadvantages of adding noise to microdata, with respect to both disclosure issues and the behavior of published estimates. Return to top
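The multiplicative-noise idea can be illustrated with a toy sketch (this is an invented example, not the Census Bureau's actual procedure; the data values and the +/-10% noise band are arbitrary assumptions):

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical establishment microdata: a payroll value per respondent
# and the table cell (e.g., industry) each respondent tabulates into.
payroll = np.array([120.0, 45.0, 300.0, 80.0, 15.0, 220.0])
cell = np.array([0, 0, 1, 1, 1, 0])

# Perturb each respondent's value with a multiplicative factor near 1
# (a +/-10% band here, chosen purely for illustration).
factors = rng.uniform(0.9, 1.1, size=payroll.size)
noisy = payroll * factors

# Every cell total can now be published without suppression; each
# individual contribution is masked by its unknown noise factor.
true_totals = np.bincount(cell, weights=payroll)
noisy_totals = np.bincount(cell, weights=noisy)
print(true_totals)
print(noisy_totals)
```

Because each factor stays within the noise band, each published cell total differs from the true total by at most the same relative amount, which is the utility-versus-protection trade-off the abstract raises.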

Topic: Statistical Methods for Detecting and Characterizing Departures from Additivity in Multi-Dimensional Drug/Chemical Mixtures

  • Speaker: Dr. Kathy Dawson, Medical College of Virginia, Virginia Commonwealth University
  • Chair: Dr. Michael Fay, National Cancer Institute
  • Date/Time: Friday, January 19, 1996, 2:00 - 3:30 p.m.
  • Location: Executive Plaza North, 6130 Executive Plaza Boulevard, Rockville, MD. From Bethesda go North on Old Georgetown Road about 4-5 miles. Executive Plaza Boulevard intersects Old Georgetown. Turn right onto Executive Plaza Boulevard.
  • Sponsor: Public Health and Biostatistics
Abstract:

In studies of the effects of multiple drug or chemical combinations, one goal may be to detect and characterize the interactions between the agents. The techniques currently applied to this problem have limitations when the experiments involve more than 2 agents. Certain response-surface techniques require an unrealistic number of observations for studies involving a large number of agents. Current graphical methods are impossible to use in studies of 3 or more agents. In this talk two statistical techniques will be described that can be applied to studies with an unlimited number of agents. In the first approach, dose combinations are collected along rays or at fixed ratios. Using properties of this experimental design, an additive model is derived. Comparing the fitted dose-response curve along each ray to the curve predicted under additivity, synergistic and antagonistic interactions between the agents can be detected. Statistical testing procedures are given to determine whether these interactions are significant rather than due to random fluctuations in the data. Graphical techniques that enhance the interpretation of the results are described. The second approach is a point-wise test which determines if the agents interact in a nonadditive manner. This test can be applied to each dose combination of interest. After applying a multiple comparison adjustment to the resulting p-values, departures from additivity can then be characterized. These approaches are likely to be more economical than current techniques, implying that a larger number of agents can be studied in combination for the same experimental effort. Return to top
Third in Series on Non-traditional Applications of Biostatistical Methods

Topic: Parameter Estimation for a Normal Mixture Model with Migration Among Subpopulations, in an Application to Record Linkage

  • Speakers: John Horm, National Center for Health Statistics and William D. Schafer, University of Maryland
  • Discussant: William E. Winkler, Bureau of the Census
  • Chair: Julia L. Bienias, Bureau of the Census
  • Sponsor: Methodology Section
  • Date/Time: Thursday, January 23, 1996, 12:30 - 1:30 p.m.
  • Place: BLS Cognitive Lab, Postal Square Building, Room 2990, 2 Massachusetts Avenue, NE, Washington, DC (Red Line -- Union Station). Enter at Massachusetts Avenue and North Capitol Street. Please call Julia Bienias (301-457-4934) or Sandy West (202-606-7384) to be placed on the visitor's list.
Abstract:

Motivated by an application in the field of record linkage between two files with imperfect identifiers, an extension to a two-component normal mixture model has been developed. This model, a normal mixture model with migration, allows for members of one subpopulation to migrate or change membership to the second subpopulation. The presentation will include an introduction to normal mixture models and their applications, then the extension to the normal mixture model with migration.

The mathematical form of the model will be presented along with graphical representations. Methods of estimating the model parameters via constrained maximum likelihood are discussed and estimation results presented. Specific applications to a probabilistic record linkage application will be discussed where classifications are expected to change as the result of successive matches. Return to top
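The plain two-component normal mixture that the migration model extends can be fit by a short EM iteration. The sketch below uses simulated data and omits both the migration term and the constrained-likelihood machinery discussed in the talk:

```python
import numpy as np

def normal_pdf(x, mu, sigma):
    # Density of N(mu, sigma^2) evaluated at x
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

rng = np.random.default_rng(0)
# Simulated comparison scores from two subpopulations,
# e.g., nonlinks centered at -2 and true links centered at 3
x = np.concatenate([rng.normal(-2, 1, 300), rng.normal(3, 1, 200)])

# Initial values for mixing proportion, means, and standard deviations
p, mu1, mu2, s1, s2 = 0.5, -1.0, 1.0, 1.0, 1.0
for _ in range(100):
    # E-step: posterior probability that each observation is from component 1
    d1 = p * normal_pdf(x, mu1, s1)
    d2 = (1 - p) * normal_pdf(x, mu2, s2)
    w = d1 / (d1 + d2)
    # M-step: weighted updates of all five parameters
    p = w.mean()
    mu1 = np.average(x, weights=w)
    mu2 = np.average(x, weights=1 - w)
    s1 = np.sqrt(np.average((x - mu1) ** 2, weights=w))
    s2 = np.sqrt(np.average((x - mu2) ** 2, weights=1 - w))

print(p, mu1, mu2)
```

With well-separated components, the fitted means land near the true values of -2 and 3; the migration extension additionally lets mass shift between the two subpopulations across successive matching passes.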

Topic: Rank Invariant Score Tests for Interval Censored Data

  • Speaker: Michael Fay, National Cancer Institute
  • Discussant: Ming Gao Gu, McGill University & National Heart, Lung, & Blood Institute
  • Chair: Julia L. Bienias, Bureau of the Census
  • Sponsor: Methodology Section
  • Date/Time: Thursday, February 8, 1996, 12:30-2:00 p.m.
  • Place: BLS Cognitive Lab, Postal Square Building, Room 2990, 2 Massachusetts Avenue, NE, Washington, DC (Red Line -- Union Station). Enter at Massachusetts Avenue and North Capitol Street. Please call Julia Bienias (301-457-2696) or Sandy West (202-606-7384) to be placed on the visitor's list.
Abstract:

In interval censored data we do not observe each response directly but only observe the interval in which each response resides. There are primarily two methods currently used for creating rank tests for interval censored data. The first method was proposed by Self and Grossman (1986), modeled after the tests of Prentice (1978) for right censored data. This method creates a test from the marginal likelihood of the set of ranks which are consistent with the observed intervals. For interval censored data this method may lead to intractable calculations.

The second method models an unknown monotonic transformation of the responses as a set of nuisance parameters and creates a rank invariant score test on a location shift parameter. This is the method we explore. It generalizes tests introduced by Finkelstein (1986) for the proportional hazards model only. This method is closely related to the generalized linear model for ordinal or grouped data. Grouped data are a special case of interval censored data where none of the intervals overlap, and the resulting likelihood in our model is the same as that produced by the generalized linear model where the nuisance parameters are known as cut points.

We study the problems that arise when the number of observation times is large, creating a large number of nuisance parameters. We also introduce a graphical test of the assumption of the location shift model. Return to top

Topic: Maximizing and Minimizing Overlap When Selecting a Large Number of Units Per Stratum

  • Speaker: Lawrence Ernst, Bureau of Labor Statistics
  • Chair: Julia L. Bienias, Bureau of the Census
  • Sponsor: Methodology Section
  • Date/Time: Thursday, February 15, 1996, 12:30-1:30 p.m.
  • Place: BLS Cognitive Lab, Postal Square Building, Room 2990, 2 Massachusetts Avenue, NE, Washington, DC (Red Line -- Union Station). Enter at Massachusetts Avenue and North Capitol Street. Please call Julia Bienias (301-457-2696) or Sandy West (202-606-7384) to be placed on the visitor's list.
Abstract:

A number of procedures have been developed over the last five decades, beginning with the 1951 work of Keyfitz, for maximizing or minimizing the overlap of sampling units for two stratified designs for which the units are selected with probability proportional to size, using size measures which are generally different for the two designs. In this seminar we briefly review this past work and then present two new overlap procedures.

The first new procedure can be used to increase or decrease the overlap of sampling units when a large number of units are selected per stratum and the samples for the two designs must be selected sequentially, as is the case when the second design is a redesign of the first design. Most previous overlap procedures are either not applicable at all to other than 1 unit per stratum designs or are not feasible to implement unless the number of units selected per stratum is very small. This procedure does not generally yield an optimal overlap. The procedure was previously presented at the Orlando Joint Statistical Meetings.

The second new procedure is one for maximizing or minimizing overlap when the units can be selected for the two designs simultaneously, as may be the case for two different surveys. The procedure guarantees an optimal overlap if the two designs have identical stratifications, but can still be used, with loss of optimality, if the stratifications differ. This procedure, like the first procedure, is usable when a large number of units are selected per stratum. The procedure has not been previously presented. Return to top
Rescheduled from November 15, 1995

Topic: Marginal Regression Models for Ordered Categorical Item Responses

  • Speaker: Scott L. Zeger, Ph.D., Johns Hopkins University
  • Discussant: Stuart Baker, Sc.D., National Cancer Institute
  • Chair: Julia L. Bienias, Sc.D., Bureau of the Census
  • Sponsor: Methodology Section
  • Date/Time: Wednesday, February 21, 1996, 12:30-1:30 p.m.
  • Place: BLS Cognitive Lab, Postal Square Building, Room 2990, 2 Massachusetts Avenue, NE, Washington, DC (Red Line -- Union Station). Enter at Massachusetts Avenue and North Capitol Street. Please call Julia Bienias (301-457-2696) or Sandy West (202-606-7384) to be placed on the visitor's list.
Abstract:

In an investigation of the natural history of disease and disability among elderly women, we attempt to assess physical and mental functioning through a relatively large number of questions and performance-based measures. While the response, "physical function," is easily conceived of, it can only be measured in terms of a high-dimensional observation.

There are at least three distinct strategies for regression analysis with high-dimensional responses. In the first, we derive summary variables and characterize their change as a function of covariates using standard regression methods. In the second, we posit the existence of underlying, unobservable "function variables" that satisfy a regression model. Our observed items are assumed to be imperfect measures of subsets of these latent variables. In the third approach, the focus of this paper, we simultaneously regress each of the many responses on explanatory variables while also modeling the interdependence among the multivariate responses. The regression coefficients are then summarized to indicate the overall relationship with covariates. This third approach allows us to identify items whose relationship deviates from the majority with respect to dependence on covariates.

This paper will present the third approach which we refer to as "marginal modelling" (Dale, 1986). We simultaneously specify two sets of regression models:

(i) for the regression of each ordinal item response on the explanatory variables; and
(ii) for each pair-wise association.


We first consider maximum likelihood estimation but because it is computationally impractical except in very small problems, an estimating equation alternative is proposed. The methods will be illustrated with analysis of high-dimensional cross-sectional data from the first round of the Johns Hopkins University-NIH Women's Health and Aging Study (WHAS). Return to top

Topic: A Race To The Bottom: The Case of Unemployment Insurance

  • Speaker: Laurie Bassi and Daniel P. McMurrer, Advisory Council on Unemployment Compensation
  • Chair: Michael Horrigan, Bureau of Labor Statistics
  • Date/Time: Thursday, March 7, 1996, 12:30-2:00 p.m.
  • Location: Meeting room #1, Postal Square Building (PSB), 2 Massachusetts Avenue, NE, Washington, D.C. 20012
  • Sponsor: Social and Demographic Section
Abstract:

The nation's system of publicly-provided unemployment insurance, which was created by the Social Security Act of 1935, serves two primary economic functions. First, it provides workers with insurance against involuntary unemployment. Second, by maintaining the purchasing power of the unemployed, the system serves as an automatic stabilizer during economic downturns. The system's capacity to achieve both of these functions is determined, in large part, by the percentage of the unemployed who actually receive unemployment insurance benefits.

This paper explores the heretofore unexamined hypothesis that competition among the states to keep employer-financed Unemployment Insurance (UI) payroll taxes low (in an attempt to attract and retain employers) has contributed to the decline in percentage of unemployed who actually receive unemployment insurance benefits. The paper's findings suggest that interstate competition is, indeed, significant, and that competition has increased substantially in recent years. At the current levels of competition, the states do, indeed, appear to be locked into a race to the bottom. Return to top

Topic: Impacts of Sampling for Nonresponse Follow-up on Census Methodology for a One Number Census

  • Speaker: Michael Ikeda, Bureau of the Census
  • Discussant: Gregg J. Diffendal, Bureau of the Census
  • Chair: Julia L. Bienias, Bureau of the Census
  • Date/Time: Thursday, March 7, 1996, 12:30-1:30 p.m.
  • Place: BLS Cognitive Lab, Postal Square Building, Room 2990, 2 Massachusetts Avenue, NE, Washington, DC (Red Line -- Union Station). Enter at Massachusetts Avenue and North Capitol Street. Please call Julia Bienias (301-457-2696) or Sandy West (202-606-7384) to be placed on the visitor's list.
  • Sponsor: Methodology Section
Abstract:

The 1995 Test Census is researching two fundamental changes to traditional U.S. census methodology - following up only a sample of nonresponding households (NRFU sampling) and adjusting for coverage. The introduction of NRFU sampling with coverage adjustment creates the need to re-evaluate and change other traditional census methodology to ensure proper census estimates. This paper examines the implications of these fundamental changes for census search match methodology, census imputation methodology, and uses of late census data to obtain proper estimates; addresses resulting changes to these methodologies which are being implemented for the 1995 test; and proposes research and alternatives to these methodologies in an attempt to improve census estimates in the 2000 census. Return to top

Topic: Estimation of Variance Components for the U. S. Consumer Price Index

  • Speakers: Robert M. Baskin and William H. Johnson, Bureau of Labor Statistics
  • Discussant: Michael P. Cohen, National Center for Education Statistics
  • Chair: Julia L. Bienias, Bureau of the Census
  • Date/Time: Wednesday, March 13, 1996, 12:30-1:30 p.m.
  • Place: BLS Cognitive Lab, Postal Square Building, Room 2990, 2 Massachusetts Avenue, NE, Washington, DC (Red Line -- Union Station). Enter at Massachusetts Avenue and North Capitol Street. Please call Julia Bienias (301-457-2696) or Sandy West (202-606-7384) to be placed on the visitor's list.
  • Sponsor: Methodology Section
Abstract:

The current sample of the Commodities and Services portion of the Consumer Price Index is allocated according to an optimization scheme which is based on certain parameters such as number of relevant sampling units and components of variance. The present work is to estimate four components of variance arising from the sampling design. In this paper model based estimates of the variance components are derived. The current results are contrasted with the results of the same project done for the 1987 revision. An attempt is made to investigate the efficiency in estimating the components of variance by these methods. Return to top

Title: MACROECONOMICS: PRO(jections) and CON(jectures)

  • Speaker: Ralph Monaco, Economist, Inforum Group, and Department of Economics, University of Maryland
  • Chair: Michael Horrigan, Bureau of Labor Statistics
  • Date/Time: Wednesday, March 20, 1996, 12:30-2:00 p.m.
  • Location: Meeting room #1, Postal Square Building (PSB), 2 Massachusetts Avenue, NE, Washington, D.C.
  • Sponsor: Social and Demographic Section
Abstract:

Macroeconomic projections play a key role in the development of the federal budget outlook and in business planning. In this presentation, we examine macro projections from several viewpoints. First, we review current macro conditions and present a short-term outlook. Next we compare the outlook with those developed by CBO, CEA/OMB/Treasury, and a consensus of other private forecasters. We use these various forecasts to show that federal deficit differences between alternative budgets are not largely the result of differing assumptions concerning macroeconomic outcomes. Finally, we note that the current macro projections horizon is insufficiently long to address the key questions raised in the recent budget debates. Return to top

TOPIC: The Commerce Department-Wide Customer Satisfaction Survey: From Start to Finish

  • Speakers: Tracy R. Wellens, Bureau of the Census, and Frank A. Vitrano, Bureau of the Census
  • Discussant: Len Covello, Farm Services Agency, USDA
  • Chair: Linda Atkinson, Economic Research Service
  • Day/Time: Monday, March 25, 1996, 12:30 - 2:00 p.m.
  • Location: BLS Cognitive Lab, Postal Square Building, Room 2990, 2 Massachusetts Avenue, NE, Washington, DC (Red Line -- Union Station). Enter at Massachusetts Avenue and North Capitol Street. Call Linda Atkinson (202-219-0934) to be placed on the visitors list.
  • Sponsor: Economics Section
Abstract:

In order for a government agency to provide the highest quality of customer service, it is first necessary to know the needs and expectations of the customer. Once this is known, action can be taken to better meet these needs and expectations. The development of an appropriate survey instrument and data collection methodology form the foundation of this dialogue between the agency and the customer. This paper will outline issues related to the development and implementation of a "generic" customer satisfaction survey which was used to assess the satisfaction of customers from 14 agencies (consisting of 20 operating units) within the Department of Commerce. These agencies offer diverse products and services; therefore, the task of developing one instrument for surveying satisfaction across these agencies was challenging. Moreover, this questionnaire was designed to obtain both agency-specific and departmental comparison information, which led to additional complexities. Several problems which were encountered when developing a frame of customers, and how these problems were addressed, will also be discussed. The aim of this paper is to highlight survey results and to discuss issues related to making comparisons among these distinct agencies. In addition, lessons learned from the process and suggestions for future research will be explored. Return to top

Topic: How many benchmarks?

  • Speaker: Chris Skinner
  • Date/Time: Thursday, April 4, 1996, 12:30 p.m.
  • Location: Room 2990, Postal Square Building, 2 Massachusetts Ave NE. Non-BLS attendees should call Sarah Jerstad [202-606-7390] by April 3 to be put on guard's list.
  • Abstract: The seminar will explore the selection of covariates for regression estimation in survey sampling.
Return to top

TOPIC: The Competing Risks of Mortgage Terminations by Default and Prepayment

  • SPEAKER: Robert Van Order, Freddie Mac (Joint work with Yongheng Deng and John M. Quigley, both of Univ. of California, Berkeley)
  • CHAIR: Robert Avery, Federal Reserve
  • DATE/TIME: Thursday, April 11, 1996; 12:30 PM - 2:00 PM
  • LOCATION: Federal Reserve Board, Room M-3219, Martin Building, 20th and C Streets, NW (Orange/Blue Lines Foggy Bottom, or Red Line Farragut North). People outside the FRB, please call Linda Pitts at 202-736-5617 to have your name put on the list of attendees.
  • SPONSOR: Economics Section
Abstract:

Pricing mortgage contracts is complicated by the behavior of homeowners who may choose to exercise their options to default or to prepay. These options are distinct, but not independent. In this paper, we present and estimate a unified economic model of the competing risks of mortgage termination by prepayment and default. We adopt a proportional hazards framework to analyze these competing and dependent risks in a model with time-varying covariates applied to a large sample of individual loans. The empirical model is computationally feasible only through the development of an alternative estimation technique based on semi-parametric methods (SPE). The SPE has several other advantages over the more familiar approaches when applied to this class of problem. The substantive results of the analysis provide powerful support for the contingent claims model which predicts the exercise of financial options. The results also provide strong support for the interdependence of the decisions to prepay and to default on mortgage obligations. Furthermore, the results indicate that liquidity constraints, investor preferences for risk, and investor sophistication also play important roles in the exercise of options in the mortgage market. Return to top

Topic: The Transformational Performance Management Model

  • Speaker: Dr. Joan E. Cassidy, President, Integrated Leadership Concepts, Inc.
  • Chair: Harold S. Johnson, Bureau of Labor Statistics
  • Date/Time: Tuesday, April 16, 1996, 12:30 - 2:00 p.m.
  • Location: BLS Conference and Training Center, Postal Square Building, Suite G440, Room 9, 2 Massachusetts Avenue, NE, Washington, DC (Red Line -- Union Station). Enter at First Street - immediately across from the Metro. Take elevator to Ground Level. Please call Harold Johnson (202-606-7758) to be placed on the visitor's list.
  • Sponsor: Quality Assurance Section
Abstract:

Never before in the history of Government has it been more important for managers to measure and manage performance. Managers need to be able to integrate disparate pieces of information in order to accurately measure outcomes. These outcomes need to be connected to customer satisfaction and cost reduction. Managers need to demonstrate that their products or services are wanted or needed, and that they can deliver them effectively and efficiently.

While many organizations attempt to measure effectiveness and efficiency, we have found that they have difficulty in determining what to measure; how to measure it; how to analyze the results; and, most importantly, how to use the results to improve customer satisfaction while reducing costs. The focus here is on the use of a Transformational Performance Management Model (TPMM) to identify inefficiencies and to establish mechanisms of accountability that are fair, accurate, and relatively non-threatening. A major objective of this presentation will be identifying appropriate tools and techniques for measuring and managing performance in each stage of the model. Return to top

Topic: Cosmeticization and Calibration of Sample Survey Estimators

  • Speaker: Ken Brewer, Australian National University
  • Chair: Julia L. Bienias, Bureau of the Census
  • Sponsor: Methodology Section
  • Date/Time: Tuesday, April 16, 1996, 11:30 a.m. - 12:30 p.m.
  • Location: Bureau of the Census Auditorium. Call Barbara Palumbo at (301) 457-4892 by Friday April 12 to be put on the visitors' list. *** Note special place and time. ***
Abstract:

Cosmetic estimators (Sarndal and Wright, Scand. J. Statist. 1984) are ones that can readily be interpreted using either the design-based or the model-based approach to sampling inference. In two recent papers I have developed this approach by requiring that the two estimates of the regression coefficient as well as the two estimates of total should be equal.

In the second one (Pakistan J. Statist. 1994) I made use also of calibrated estimation (Deville and Sarndal, JASA 1992). Calibration ensures that the relationship between sample and estimator is the same as that achieved more directly by selecting a balanced sample (Royall and Herson, JASA 1973, two papers). Calibration bears the same relationship to balancing as poststratification bears to stratification - not as efficient, but more flexible.

Estimators that are both cosmetic and calibrated can be written in four equivalent forms, each of which carries its own interpretation. They seem to perform well enough in practice, and are particularly suitable for detecting and treating sample units whose auxiliary variable values indicate that they are "unrepresentative" of the population as a whole.
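A small numerical sketch of linear calibration in the Deville-Sarndal sense may help fix the idea: design weights are adjusted so that the weighted auxiliary totals hit known population benchmarks exactly. The weights, auxiliary values, and benchmarks below are all invented for illustration:

```python
import numpy as np

# Hypothetical sample: design weights d and two auxiliary variables per unit
d = np.array([10.0, 12.0, 8.0, 15.0, 11.0])
X = np.array([[1.0, 2.0],
              [1.0, 3.5],
              [1.0, 1.0],
              [1.0, 4.0],
              [1.0, 2.5]])  # column 1: unit count; column 2: a size measure

# Known population benchmarks for the two auxiliary totals
T = np.array([60.0, 160.0])

# Linear calibration: w_i = d_i * (1 + x_i' lam), with lam chosen so that
# X'w = T. This solves the chi-square-distance calibration problem.
A = X.T @ (d[:, None] * X)
lam = np.linalg.solve(A, T - X.T @ d)
w = d * (1 + X @ lam)

print(X.T @ w)  # reproduces the benchmark totals T
```

By construction X'w = X'd + A lam = T, so the calibrated estimator is benchmark-consistent regardless of how far the design weights started from the targets, which is the "flexible but less efficient than balancing" behavior described above.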

This program is being given jointly with the Census Bureau's Statistical Research Division seminar series. This program is physically accessible to persons with disabilities. Requests for sign language interpretation or other auxiliary aids should be directed to Barbara Palumbo (SRD), 457-4892 (v), 457-3675 (TDD). Return to top

Title: The Information at the Frontiers of Statistics

  • Speaker: Ehsan S. Soofi, University of Wisconsin-Milwaukee
  • Chair: Edward J. Wegman, George Mason University
  • Date/Time: Tuesday, April 16, 1996, 12:00 noon - 1:00 p.m.
  • Location: Staughton Hall 301, The George Washington University, 707 22nd Street, NW (Corner of 22nd and G Streets). Metro Stop: Foggy Bottom.
  • Sponsors: Washington Statistical Society, Departments of Operations Research and Management Science, The George Washington University and Department of Applied and Engineering Statistics, George Mason University
Abstract:

Since the publication of the seminal note, Kullback and Leibler (1951), there has been continual endeavor in statistics and related fields to explicate the existing methods and to develop new methods based on the concept of information. The logical foundations, elegance, and versatility of the information-theoretic approach to statistical problems have been increasingly attracting attention in the statistics community.

In this talk, I will present an overview of the information-theoretic statistics research being conducted at the frontiers of the field. I will synthesize a number of papers currently appearing or due to appear in the leading statistics journals or recently presented at various statistical meetings. The current research is focused on the use of information functions as the vehicles for measuring information in important practical statistical problems. The innovations include measures of the informational values of variables in multivariate data sets, measures of the informational worth of an observation relevant to a specific purpose, probability models reflecting given information in specific problems, and information diagnostics for probing the adequacy of a constructed model as an approximation to an unknown underlying data-generating distribution.

"We shall use information in the technical sense to be defined, and it should not be confused with our semantic concept, though it is true that the properties of the measure of information following from the technical definition are such as to be reasonable according to our intuitive notion of information" (Kullback 1959). Return to top
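The Kullback-Leibler discrimination information that this research program builds on is straightforward to compute for discrete distributions; a minimal sketch (illustrative only, not taken from the talk):

```python
import math

def kl_divergence(p, q):
    """Discrimination information I(p:q) = sum_i p_i * log(p_i / q_i).

    Nonnegative, and zero exactly when p equals q; the support of p must be
    contained in the support of q.  Measured in nats (natural log)."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)
```

Note the asymmetry: I(p:q) generally differs from I(q:p), which is why information-theoretic diagnostics must specify which distribution plays the role of the reference.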

Title: New Measures of Contingent and Alternative Employment

  • Speaker: Thomas Nardone and Anne Polivka, Bureau of Labor Statistics
  • Chair: Michael Horrigan, Bureau of Labor Statistics
  • Date/Time: Wednesday, April 17, 1996, 12:30 - 2:00 p.m.
  • Location: BLS Conference Center, Meeting Room #1, Postal Square Building (PSB), 2 Massachusetts Avenue, NE, Washington, D.C. 20212. Metro: Red Line/Union Station. Enter at First Street NE. Please call Michael Horrigan (606-5905) to be placed on the visitor's list.
  • Sponsor: Social and Demographic Section
Abstract:

It has been argued that, in order to control costs, firms are increasingly seeking more flexibility in their use of labor. Employers have sought this additional flexibility within their own workforces, as well as from sources outside their organizations. Anecdotal evidence of the trend toward more flexible employment arrangements is fairly extensive; gauging the extent of such employment in the labor force as a whole, however, has been more problematic. The two major monthly surveys of employment--the Current Population Survey (CPS) and the Current Employment Statistics survey (CES)--do not directly measure the "permanence" of the employment relationship. In February 1995, therefore, the U.S. Bureau of Labor Statistics sponsored a special supplement to the CPS to estimate the number of workers in contingent jobs, that is, jobs which are structured to last only for a limited period of time. Also collected in the survey were estimates of the number of workers in several alternative employment arrangements, including those working as independent contractors and on-call workers, as well as those working through temporary help agencies or contract companies. The presentation will provide an overview of how the existence of contingent employment and alternative work arrangements were discerned from the supplement, as well as what the results suggest about the discussion of job security in the U.S. labor market.

Topic: A Bayesian Treasure Hunt in Flatland
  • Speaker: Ken Brewer, Australian National University
  • Chair: Phil Kott, National Agricultural Statistics Service
  • Date/Time: Thursday, April 18, 1996, 12:30 - 2:00 p.m.
  • Location: BLS Conference Center, Postal Square Building, Room G440 meeting room 10, 2 Massachusetts Avenue, NE, Washington, DC (Red Line -- Union Station).
  • Sponsor: Methodology Section
Abstract:

Stone (JASA 1976) challenged Bayesian statisticians to solve the following problem without incurring strong inconsistencies from the use of a uniform prior: "The whole of Flatland has been developed along the lines of a North American city; blocks, streets (north-south and east-west), and intersections. A soldier and a woman wander erratically through the streets of Flatland without immediately retracing their steps. As they proceed, they let out a taut continuous thread attached to [their starting point at intersection] O, which passes through each intersection visited by them between O and T, where T denotes the intersection where, after a stochastically long time, they come to a halt. At the center of T they make a completely random choice of one of the four cardinal directions, which points them towards the adjacent intersection. Leaving an invisible but valuable treasure at T, [they] proceed to , possibly retracing the last step of their path between O and T. In the case of retracing they take up the slack in the thread between O and, in any case, leave it attached to . The final taut state of the thread between O and removes any direct evidence of the location of T, and the inference problem is: Which of the four intersections adjacent to is T?" Stone provided a figure showing "a possible outcome of the procedure in which the treasure was left at the intersection adjacent to and east of ." He also gave three guesses, which reduced the problem to that of finding the least probable position for T. None of the Bayesian solutions offered satisfied Stone. A recent reconsideration of the problem has shown a number of interesting results which will be discussed. Return to top

Topic: A Conditional Power Trip on Fisher's LSD

  • Speaker: Michael Proschan, National Heart, Lung and Blood Institute
  • Date/Time: Tuesday, April 23, 1996, 1:30 - 2:30 p.m.
  • Location: Executive Plaza North, 6130 Executive Plaza Boulevard, Rockville, MD. From Bethesda, go north on Old Georgetown Rd about 4-5 miles. Executive Plaza Boulevard intersects Old Georgetown Rd. Turn right onto Executive Plaza Boulevard.
  • Sponsor: Public Health and Biostatistics Section
Abstract:

Conditional power calculations are very useful for assessing the likelihood of an ultimately statistically significant result given the current data. I show how to calculate conditional power in a multi-armed trial using the classic Fisher Least Significant Difference (LSD) procedure. I obtain upper bounds on the probability of Type 1 and Type 2 errors analogous to those in two-armed trials, and show that these bounds are equalities under continuous monitoring. Return to top
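For the two-armed building block that the multi-armed LSD procedure extends, conditional power has a standard closed form in the Brownian-motion (B-value) formulation; a hedged sketch using only the standard library (the talk's LSD-specific calculations are not reproduced here):

```python
import math

def phi(x):
    # Standard normal CDF via the error function.
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def conditional_power(z_interim, t, drift=None):
    """Conditional power at information fraction t, given the interim z-statistic.

    Standard formulation: B(t) = z * sqrt(t), and under drift theta,
        CP = Phi((B(t) + theta * (1 - t) - z_crit) / sqrt(1 - t)).
    If drift is None, the current trend theta = B(t) / t is assumed.
    z_crit is hard-coded for a two-sided 0.05-level test."""
    b = z_interim * math.sqrt(t)
    theta = b / t if drift is None else drift
    z_crit = 1.959963984540054  # z_{0.975}
    return phi((b + theta * (1.0 - t) - z_crit) / math.sqrt(1.0 - t))
```

For example, an interim z of 2.0 halfway through the trial gives conditional power near 0.89 under the current trend, while a z of 0 at the same point gives conditional power well under 1%.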

Topic: Using Noise for Disclosure Limitation of Establishment Tabular Data

*** Rescheduled from January 9 snow day ***
  • Speaker: B. Timothy Evans, Bureau of the Census
  • Chair: Julia L. Bienias, Bureau of the Census
  • Sponsor: Methodology Section
  • Date/Time: Wednesday, April 24, 1996, 10:00 - 11:00 a.m.
  • Location: Bureau of the Census, Conference Rooms 1 & 2. Call Barbara Palumbo at (301) 457-4892 by Monday, April 22, to be put on the visitors' list. *** Note special place and time. ***
Abstract:

The Bureau of the Census is looking into new methods of disclosure limitation for use with establishment tabular data. Currently we use a strategy that suppresses a cell in a table if the publication of that cell could potentially lead to the disclosure of an individual respondent's data. In our search for an alternative to cell suppression that would allow us to publish more data, we are considering the option of adding noise to our underlying microdata. By perturbing each respondent's data, we can provide protection to individual respondents without having to suppress cell totals.

Adding noise has several advantages over cell suppression. Cell suppression is a complicated and time-consuming procedure. Also, cell suppression must be performed separately for each data product, whereas noise need only be added once. The question remains, however, as to the utility of the data after noise is added. In this paper we discuss the advantages and disadvantages of adding noise to microdata, with respect to both disclosure issues and the behavior of estimates.
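A hypothetical multiplicative-noise scheme (not the Bureau's actual method, whose noise distribution is not given in the abstract) illustrates the mechanics: perturb once at the microdata level, then tabulate freely.

```python
import random

def add_noise(microdata, noise=0.10, seed=12345):
    """Perturb each establishment's value by a multiplicative factor.

    Illustrative scheme only: each record is multiplied by a factor drawn
    uniformly from [1 - noise, 1 + noise].  Tabulations then use the
    perturbed values, so no cell need be suppressed."""
    rng = random.Random(seed)
    return {est: value * rng.uniform(1.0 - noise, 1.0 + noise)
            for est, value in microdata.items()}

def cell_total(perturbed, members):
    # A published cell is simply the sum of its (perturbed) contributors.
    return sum(perturbed[est] for est in members)
```

Because every record is perturbed exactly once, all tabulations drawn from the same perturbed file are mutually consistent, unlike cell suppression, which must be re-run for each data product.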

This program is being given jointly with the Census Bureau's Statistical Research Division seminar series. This program is physically accessible to persons with disabilities. Requests for sign language interpretation or other auxiliary aids should be directed to Barbara Palumbo (SRD), 457-4892 (v), 457-3675 (TDD). Return to top

Topic: Replicate Variance Estimation in Stratified Sampling with Permanent Random Numbers

  • Speaker: Fritz Scheuren, George Washington University (work with Susan Hinkins, IRS, and Chris Moriarity, NCHS)
  • Discussant: Phil Kott, USDA
  • Chair: Julia L. Bienias, Bureau of the Census
  • Sponsor: Methodology Section
  • Date/Time: Thursday, May 2, 1996, 12:30 - 1:30 p.m.
  • Location: BLS Cognitive Lab, Postal Square Building, Room 2990, 2 Massachusetts Avenue, NE, Washington, DC (Red Line -- Union Station). Enter at Massachusetts Avenue and North Capitol Street. Please call Julia Bienias (301-457-2696) to be placed on the visitor's list.
Abstract:

A permanent random number (e.g., Sunter 1986, Survey Methodology; Harte, 1986) is a convenient way of sampling from administrative lists, especially in a computer file. The frame typically has an identifying number (supposedly unique) assigned in some systematic, usually nonrandom fashion (e.g., ID number used internally by an agency). Administratively this number is not supposed to change over time. For use in sampling, the number can be transformed by a conventional pseudo-random number generator device and used in building a cohort of random cases.

Permanent random numbers afford many advantages from cost savings to variance reductions, especially in time series data. Problems arise in variance estimation, however, for all but the simplest cases. Ways to deal with these will be explored. The context will be a large-scale simulation experiment but the presentations will also contain some of the history of this problem and even a little theory. Return to top
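The transform step can be sketched concretely. Here a cryptographic hash stands in for the "conventional pseudo-random number generator" of the abstract (an illustrative choice, not any agency's actual procedure): each permanent ID maps to a reproducible number in [0, 1), and a unit is sampled whenever that number falls below the sampling rate.

```python
import hashlib

def prn(unit_id):
    """Map a (supposedly permanent) ID to a reproducible number in [0, 1)."""
    digest = hashlib.sha256(str(unit_id).encode()).hexdigest()
    return int(digest[:12], 16) / 16**12

def select(frame, rate):
    # A unit is in sample whenever its PRN falls below the sampling rate.
    # Re-running this on a later frame automatically keeps overlap high,
    # which is the source of the time-series variance reductions.
    return sorted(u for u in frame if prn(u) < rate)
```

Because the PRN never changes, units selected this quarter are selected again next quarter (at the same rate), and new frame units enter the sample at the correct rate without disturbing the existing selections.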

Topic: Issues in Adaptive Sampling

  • Speaker: Steven Thompson, Pennsylvania State University
  • Chair: Mary C. Christman, American University
  • Date/Time: Tuesday, May 7, 1996, 3:30 - 4:30 p.m. Reception to follow.
  • Location: American University, Board Room, 6th Floor Butler Pavilion, Washington, D.C. On-campus metered parking is available, or take the Metro Red Line to Tenleytown-AU and the AU shuttle bus. Butler Pavilion is located on Friedheim Quad, next to the Mary Graydon Center. Call 202-885-3120 for more information.
  • Sponsors: American University, College of Arts and Sciences, Department of Mathematics and Statistics, and the Washington Statistical Society
Abstract:

Adaptive sampling designs are those in which the procedure for selecting the sample may depend on observed values of the variable of interest. For example, in a survey of a rare animal species, neighboring sample sites may be observed whenever animals of the species are encountered. In a survey of a rare, contagious disease, whenever someone with the disease is found in the sample, socially-linked individuals may be added to the sample. Examples of adaptive procedures include adaptive allocation designs and adaptive cluster sampling. Some practical and theoretical issues in adaptive sampling will be illustrated. Return to top
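The rare-species example can be sketched as adaptive cluster sampling on a grid: starting from an initial sample of cells, the four neighbours of any cell with a positive count are added, and the expansion repeats until no new cells qualify (a toy illustration; estimation under such a design, which is the hard part, is not shown).

```python
def adaptive_cluster_sample(counts, initial):
    """Adaptive cluster sampling on a grid.

    counts maps (row, col) cells to observed counts; starting from the
    initial sample, neighbours of positive-count cells are added until
    no new cells qualify."""
    sample = set(initial)
    frontier = list(initial)
    while frontier:
        r, c = frontier.pop()
        if counts.get((r, c), 0) > 0:
            for nb in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
                if nb in counts and nb not in sample:
                    sample.add(nb)
                    frontier.append(nb)
    return sample
```

Note that the final sample size is random: it depends on the observed values, which is precisely what complicates design-based inference for these procedures.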

    Topic: Design To Diagnosis: A Model Based Approach To Analyzing Multiple Indicator Outcomes

    • Speaker: Dr. Karen Bandeen-Roche, Johns Hopkins University
    • Chair: Julie M. Legler, Sc.D.
    • Date/Time: Wednesday, May 8, 1996, 12:30 - 1:30 p.m.
    • Location: Executive Plaza North Conference Room North, 6130 Executive Boulevard, Rockville MD.
    • Sponsor: Public Health and Biostatistics Section
    Abstract:

    Increasingly in health studies, outcomes are measured as multiple indicators. In analyzing such outcomes vis-a-vis risk factors, interpretation requires summary of information provided by individual responses. We propose an approach based on the Dayton/Macready (JASA 1988) latent class regression model that summarizes responses and estimates risk factor associations in a single step. Results include: 1) convenient fitting and standard error calculation; 2) diagnostics to assess empirical fit and validity of model assumptions; 3) effects of sample and covariate design on summary precision and diagnostic ability; 4) new methodology for longitudinal data analysis, and 5) theory with practical implications for 1)-4). We illustrate our methods using data on functioning in older persons. Our work has implications for describing associations parsimoniously, evaluating whether indicators effectively summarize a given outcome, and selecting improved outcome measures. Return to top
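The core latent class likelihood that the Dayton/Macready regression model extends is easy to state: a response pattern's probability is a mixture over classes of independent Bernoulli indicators. A toy sketch (in the regression version discussed in the talk, the class probabilities would depend on covariates):

```python
def latent_class_prob(y, class_probs, item_probs):
    """P(y) for binary indicators y under a latent class model:

        P(y) = sum_c pi_c * prod_j p_jc^y_j * (1 - p_jc)^(1 - y_j)

    class_probs holds the pi_c; item_probs[c][j] is p_jc."""
    total = 0.0
    for pi_c, p_c in zip(class_probs, item_probs):
        lik = pi_c
        for yj, pj in zip(y, p_c):
            lik *= pj if yj else (1.0 - pj)
        total += lik
    return total
```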

    Topic: The Prevalence of Answering Machine Usage in Agricultural Survey Populations

    • Speakers: Terry O'Connor and Jaki Stanley, National Agricultural Statistics Service, United States Department of Agriculture
    • Chair: Michael Steiner, National Agricultural Statistics Service
    • Date/Time: Wednesday, May 8, 12:30 - 2:00 p.m.
    • Location: USDA South Building, Room 4302, 12th @ Independence Ave. S.W., Washington D.C. (Blue/Orange Line --- Smithsonian Station) Enter at Corner of 12th and Independence Ave.
    • Sponsor: Agriculture and Natural Resources Section
    Abstract:

    Answering machine usage was tracked for a large national CATI agricultural survey. Whether an attempted contact ever reached an answering machine, and the final disposition of each CATI sample unit, was recorded for four successive quarters of data collection. Respondents do not appear to use answering machines to screen calls: the proportion of refusals among contacts that reached an answering machine was the same as among respondents for whom an answering machine was never reached. The prevalence of answering machines varies by state. At present, however, there appears to be little adverse impact of answering machines on response rates in agricultural surveys. Return to top

    Topic: Modeling Multivariate Discrete Failure Time Data

    • Speaker: Joanna H. Shih, Office of Biostatistics Research, National Heart, Lung, and Blood Institute
    • Chair: Sally Hunsberger
    • Date/Time: Monday, May 13, 1996, 1:30 - 2:30 p.m.
    • Location: Executive Plaza North, 6130 Executive Plaza Boulevard, Rockville, MD. From Bethesda, go north on Old Georgetown Rd about 4-5 miles. Executive Plaza Boulevard intersects Old Georgetown Rd. Turn right onto Executive Plaza Boulevard.
    • Sponsor: Public Health and Biostatistics Section
    Abstract:

    We propose a bivariate discrete survival distribution which allows flexible modeling of the marginal distributions and yields a constant odds ratio at any grid point. This distribution is readily generalized to accommodate covariates in the marginal distributions as well as in the odds ratios, and can be used to model the pairwise association of the multivariate failure times. In addition, we propose a pseudo-likelihood estimation procedure for estimating the regression coefficients in the margins and the association parameters in the odds ratios. Through simulations, we compare the performance of the estimates with that of maximum likelihood estimates for bivariate discrete failure time data. The pseudo-likelihood estimate of the association parameter has high efficiency. Loss of efficiency in the marginal regression coefficient estimates is small when the association is not strong. For both the marginal regression coefficients and the association parameter, coverage probabilities are close to the 95% nominal level. We apply the proposed methods to a subset of Framingham Study data for illustration. Return to top
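One classical construction with exactly this property, given margins and a constant global odds ratio at every grid point, is the Plackett family. Whether it coincides with the authors' proposal is not stated in the abstract, so treat this as an illustrative analogue: the joint CDF is the closed-form root of the quadratic defined by the odds-ratio constraint.

```python
import math

def plackett_joint(f1, f2, psi):
    """Joint CDF value H with marginal CDF values f1, f2 and constant
    global odds ratio psi:

        psi = H * (1 - f1 - f2 + H) / ((f1 - H) * (f2 - H))

    Closed-form root of the defining quadratic (Plackett family)."""
    if abs(psi - 1.0) < 1e-12:
        return f1 * f2  # psi = 1 corresponds to independence
    s = 1.0 + (psi - 1.0) * (f1 + f2)
    return (s - math.sqrt(s * s - 4.0 * psi * (psi - 1.0) * f1 * f2)) / (2.0 * (psi - 1.0))
```

Plugging the returned H back into the odds-ratio expression recovers psi at any grid point, which is the defining feature the abstract describes.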

    Topic: Quality Improvement Driven Government

    • Speakers: Lisa Miller and David Wilkerson, Coopers & Lybrand
    • Chair: Amrut Champaneri
    • Date/Time: Thursday, May 23, 1996, 12:30 - 2:00 p.m.
    • Location: BLS Cognitive Lab, Postal Square Building, Room 2990, 2 Massachusetts Avenue, N.E., Washington, D.C. (Red Line -- Union Station). Enter at Massachusetts Avenue and North Capitol Street. Please call Harold Johnson (202-606-7758) to be placed on the visitor's list.
    • Sponsors: Quality Assurance Section
    Abstract:

    In a 1994 study of management practices most associated with gaining positive results in improvement, innovation and customer satisfaction, government organizations consistently reported that they were less likely than private companies to carry out the best management practices described in the survey. Coopers & Lybrand has developed a framework known as the Improvement Driven Organization to help government agencies manage two challenges: (1) to plan and execute a major process improvement and (2) to completely transform an entire organization so that it is improvement driven in all its parts. There are three phases in the application of the framework: a leader driven phase, a process driven phase, and an improvement driven phase.

    The speakers will present the framework developed by Coopers & Lybrand and discuss some of the tools used in implementing the framework in government agencies. Return to top

    Topic: Knowledge-based Classification of Survey Data

    • Speaker: Fred Conrad, Bureau of Labor Statistics
    • Discussant: Mick Couper, University of Maryland
    • Chair: Sandra A. West, Bureau of Labor Statistics
    • Date/Time: Wednesday, May 29, 1996, 12:30 - 2:00 p.m.
    • Location: BLS Cognitive Lab, Postal Square Building, Room 2990, 2 Massachusetts Avenue, NE, Washington, DC (Red Line -- Union Station). Please call Sandy West (202-606-7384) to be placed on the visitor's list.
    • Sponsor: Methodology Section
    Abstract:

    In order to analyze most survey responses, the data must be assigned to categories. Open ended responses are usually classified by coding specialists, often after the raw data are collected. This talk distinguishes between simple and complex classification and argues that expert system software is particularly good at supporting the second of these. Simple classification usually involves locating the response in some sort of dictionary where it is explicitly mapped to a particular category. Advances in automated coding of this sort are first reviewed. Then complex classification is presented as a task requiring the coding specialist's expertise about the content area. This is illustrated with two prototype expert systems developed at the Bureau of Labor Statistics. The first embodies specialists' knowledge of occupational classification in the Occupational Compensation Survey Program. It was designed to support classification decisions during an interview. The second prototype expert system is based on specialists' knowledge of particular product areas. It checks the specialists' decisions when they review commodity substitutions in the Consumer Price Index. Finally, some of the practical considerations of implementing expert systems in survey organizations are presented. Return to top
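The "simple" dictionary-lookup classification that the talk contrasts with expert-system coding can be sketched in a few lines (the codes and fallback label below are hypothetical, purely for illustration):

```python
def simple_classify(response, dictionary, fallback="REFER TO SPECIALIST"):
    """Simple classification: look the normalized open-ended response up
    in a dictionary mapping responses to codes.

    Responses not found are routed to a coding specialist -- the 'complex'
    classification that the prototype expert systems support."""
    key = response.strip().lower()
    return dictionary.get(key, fallback)
```

The interesting work, per the talk, is in the fallback branch: encoding the specialist's content-area expertise so the system can support, or check, those harder decisions.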

    Topic: Changing Inequality in the U.S. from 1980-1994: A Consumption Viewpoint

    • Speaker: David S. Johnson, U.S. Bureau of Labor Statistics
    • Discussant: Pat Ruggles, Minority Staff, Joint Economic Committee
    • Chair: Bob Williams, Congressional Budget Office
    • Day/Time: Tuesday, June 11, 1996, 12:30-2:00 p.m.
    • Location: BLS Cognitive Lab, Postal Square Building, Room 2990, 2 Massachusetts Avenue, NE, Washington, DC (Red Line -- Union Station). Enter at Massachusetts Avenue and North Capitol Street. Call Linda Atkinson (202-219-0934) to be placed on the visitors list.
    • Sponsor: Economics Section
    Abstract:

    Consumption and income inequality increased during the 1980s even though the economy experienced its longest economic expansion. Growth in aggregate measures of the economy appeared to have a narrowing effect on income inequality, seeming to support the oft-mentioned cliche that a rising tide lifts all boats. However, this relationship did not hold up during the 1980s. Many studies suggest that the 1980s, especially the early 1980s, were different.

    In this paper (coauthored with Stephanie Shipp), we examine the increase in income and consumption inequality by comparing the trends in inequality to macroeconomic variables. Much of the previous literature uses annual measures of aggregate economic performance and the distribution of income. In this paper, we make use of the ongoing nature of the Consumer Expenditure Survey data to obtain quarterly income and consumption data. These quarterly data are used to track inequality vis-a-vis the business cycle during the 1980s and early 1990s. Similar to previous studies, we find that the change in inequality has been different since 1980. While the effect of unemployment on consumption inequality is relatively small, inflation has a significant increasing effect on inequality. We also examine the differences between income and consumption inequality and find that income inequality is more volatile than consumption inequality and is also about 50% higher. Return to top
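The income-versus-consumption comparisons can be illustrated with a standard Gini coefficient, one common inequality measure (the paper's own measures are not specified in the abstract):

```python
def gini(values):
    """Gini coefficient via the sorted mean-difference formula:

        G = 2 * sum_i i * x_(i) / (n * sum x) - (n + 1) / n,

    with x_(i) the values in ascending order.  Zero for perfect equality,
    approaching one as all resources concentrate in one unit."""
    xs = sorted(values)
    n = len(xs)
    total = sum(xs)
    weighted = sum(i * x for i, x in enumerate(xs, start=1))
    return 2.0 * weighted / (n * total) - (n + 1.0) / n
```

Comparing the statistic across an income series and a smoother consumption series for the same households is one simple way to see the pattern the authors describe, with consumption inequality lower than income inequality.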

    PRESIDENTIAL INVITED ADDRESS

    Topic: Towards a Unified Federal Approach to Statistical Confidentiality

    • Speaker: Katherine K. Wallman, Chief Statistician of the United States, Office of Management and Budget
    • Discussants:
      Joe Cecil, Federal Judicial Center
      Tom Jabine, Independent Consultant
      David McMillen, Staff, House Committee on
      Government Reform and Oversight Operations
      • Chair: Ron Fecso, NASS, President, Washington Statistical Society
      • Date/Time: Tuesday, June 18, 1996, 3:00 - 4:30 p.m. (Reception 4:30 - 5:00)
      • Location: Room 227, Ross Hall, George Washington University, 2300 I Street, NW, Washington, DC (adjacent to Foggy Bottom Metro)
      • Sponsor: Statistics and Public Policy Section
      Abstract:

      Congress has recognized that a confidential relationship between statistical agencies and their respondents is essential for effective statistical programs. However, the specific statutory formulas devised to implement this principle have, in some cases, created barriers to effective working relationships among the statistical agencies. OMB recently prepared an order designed to clarify, and to make consistent, government policy protecting the privacy and confidentiality interests of individuals and organizations who provide data to Federal statistical programs. The order aims to resolve a number of ambiguities in existing law and to give additional weight and stature to policies that statistical agencies have pursued for decades. In a companion initiative, OMB has prepared a legislative proposal for a "Statistical Confidentiality Act" that makes prudent changes to existing laws that respect the privacy and confidentiality concerns of the public while making responsible improvements in the way statistical agencies operate in the public interest. This session will describe the nature of these recent initiatives and will discuss their implications for the Federal statistical community. Return to top

      Topic: Time Series Decomposition of Periodic Survey Data with Autocorrelated Errors

      • Speakers: Dick Tiller, Bureau of Labor Statistics
      • Chair: Sandra A. West, Bureau of Labor Statistics
      • Date/Time: Wednesday, June 19, 1996, 12:30 - 2:00 p.m.
      • Location: BLS Cognitive Lab, Postal Square Building, Room 2990, 2 Massachusetts Avenue, NE, Washington, DC (Red Line -- Union Station). Please call Sandra West (202-606-7384) to be placed on the visitor's list.
      • Sponsor: Methodology Section
      Abstract:

      Statistical agencies routinely seasonally adjust a large number of time series based on relatively small samples from periodic surveys with an overlapping design. This type of design generates strong positive autocorrelations in the estimates of key population characteristics, which alters the nature of the time series decomposition in a fundamental way. It is not possible to identify the unobserved population components from information on the aggregate survey series alone. Thus, direct application of conventional methods of time series decomposition may fail to adequately separate sampling error (SE) from the population subcomponents. The missing information is the SE structure. From a model-based perspective, the solution is straightforward. SE is treated as an additional unobserved component of the time series, with the special advantage that it is objectively identified by design information. Given this information, a filter is constructed to suppress SE variation along with the seasonal and irregular noise in the population. Most statistical agencies use X-11 and may find full model-based estimation difficult to apply to a large number of series. Accordingly, the effects of SE on X-11 estimators are examined and ways of adjusting X-11 are explored. Data from the Current Population Survey are used as examples. Return to top

      Topic: A New Algorithm For Nonlinear Least Squares Estimation: Application To The Langmuir Model

      • Speaker: S. K. Katti, Professor Emeritus, University Of Missouri
      • Chair: Sandra A. West, Bureau of Labor Statistics
      • Date/Time: Wednesday, June 26, 1996, 12:30-2:00 p.m.
      • Location: BLS Cognitive Lab, Postal Square Building, Room 2990, 2 Massachusetts Avenue, NE, Washington, DC (Red Line -- Union Station). Please call Sandra West (202-606-7384) to be placed on the visitor's list.
      • Sponsor: Methodology Section
      Abstract:

      It is recognized that one must "massage" the data/parameters before using any algorithm. We give our own style. In the maximum descent method, regardless of the number of parameters, the maximum descent line is one dimensional, written as a function of a scalar parameter. For a nonlinear case, the graph of the norm can be quite ghastly. From the computational angle, the best approach is to find the numerical minimum along this line. For the many data sets and models tried, the number of iterations was barely five. To keep the discussion simple, we stay with the Langmuir model and the method of maximum descent. Other variations on the model, as well as on the minimizing methods, work out along very parallel lines. Return to top
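The descent-line idea can be sketched for the Langmuir model: the residual norm is a function of a single scalar step along the (negative) gradient direction, and its numerical minimum is found along that one-dimensional line. A coarse scan stands in here for whatever line search the author actually uses; this is an illustrative sketch, not the talk's algorithm.

```python
def langmuir(x, a, b):
    # Langmuir model: y = a * x / (1 + b * x)
    return a * x / (1.0 + b * x)

def descent_step(xs, ys, a, b, h=1e-6):
    """One maximum-descent step for least squares on the Langmuir model.

    Computes a numerical gradient of the residual norm at (a, b), then
    scans scalar steps t along the negative gradient for the 1-D minimum."""
    def norm(a_, b_):
        return sum((y - langmuir(x, a_, b_)) ** 2 for x, y in zip(xs, ys))
    f0 = norm(a, b)
    ga = (norm(a + h, b) - f0) / h   # numerical partial wrt a
    gb = (norm(a, b + h) - f0) / h   # numerical partial wrt b
    best_t, best_f = 0.0, f0
    for k in range(1, 60):
        t = 2.0 ** (-k / 2.0)
        f = norm(a - t * ga, b - t * gb)
        if f < best_f:
            best_t, best_f = t, f
    return a - best_t * ga, b - best_t * gb, best_f
```

Iterating descent_step from a rough starting point drives the norm down; the abstract reports that, with suitably massaged data and parameters, about five such iterations sufficed on the data sets tried.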

      Topic: A Permutational Step-Up Method of Testing Multiple Outcomes.

      • Speaker: James Troendle
      • Chair: TBA
      • Day/Time: Wednesday, September 4, 1996, 11:00 a.m.
      • Location: National Cancer Institute, 6130 Executive Boulevard, Conference Room G.
      • Sponsor: Biostatistics Section
      Abstract:

      A permutational step-up multiple comparison procedure to adjust the p values from k related hypotheses is described. The method is applicable when two groups of subjects are being compared on each of k outcomes. It is related to the step-down method of Westfall and Young (1993) and Troendle (1995), and is an alternative to the analytic step-up method of Dunnett and Tamhane (1992), which requires a specific distribution and correlation structure. By conditioning on the data observed, this permutational method avoids any specific distribution or correlation assumption. It is shown very generally that the method asymptotically controls the familywise probability of a type I error. Return to top
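The conditioning device underlying such procedures is the permutation distribution of the test statistic given the observed data. A minimal exact version for a single outcome shows the building block; the step-up adjustment across k related outcomes, which is the talk's contribution, is not reproduced here.

```python
from itertools import combinations

def permutation_pvalue(group_a, group_b):
    """Exact two-sided permutation p-value for a difference in means.

    Conditions on the pooled observed data: under the null, every
    relabelling of the pooled values into groups of the observed sizes
    is equally likely, so no distributional assumption is needed."""
    pooled = group_a + group_b
    n_a = len(group_a)
    observed = abs(sum(group_a) / n_a - sum(group_b) / len(group_b))
    count = total = 0
    for idx in combinations(range(len(pooled)), n_a):
        a = [pooled[i] for i in idx]
        b = [pooled[i] for i in range(len(pooled)) if i not in idx]
        stat = abs(sum(a) / len(a) - sum(b) / len(b))
        if stat >= observed - 1e-12:  # tolerance for float ties
            count += 1
        total += 1
    return count / total
```

Exhaustive enumeration is feasible only for small samples; in practice such procedures use random permutations, but the conditioning logic is the same.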

      Topic: Effects of Time and Salience on Expenditure Recall Accuracy in Diary Surveys

      • Speaker: Monica Dashen, Bureau of Labor Statistics
      • Chair: Julia L. Bienias, Bureau of the Census
      • Date/Time: Wednesday, September 18, 1996, 12:30-1:30
      • Location: BLS Cognitive Lab, Postal Square Building, Room 2990, 2 Massachusetts Avenue, NE, Washington, DC (Red Line -- Union Station). Enter at 1st Street and Massachusetts Avenue.
      • Sponsor: Methodology Section
      Abstract:

      Diary surveys often ask respondents to record events on a daily basis (e.g., Consumer Expenditure Survey). It has been assumed that recall error typically associated with interviews is reduced in diaries because information is recorded soon after it happens. However, recall error can arise in diary surveys during a follow-up interview where the respondent is required to report purchases omitted during the diary keeping period. The time lapse between the interview and date of purchase may influence how well respondents are able to recall their expenditures during the interview.

      The present work examined the effects of time and salience on reporting performance for purchased items. These effects were examined by asking forty-eight respondents to recall their diary entries after a time lapse of 1, 2, or 4 weeks. The reported items were scored against the recorded items. Effects of time and expenditure salience on reporting performance were observed, and intrusion errors increased after 1 week. These results underscore the importance of memory as a contributing factor to measurement error in diaries. Return to top

      Topic: Intervention Analysis with Control Groups

      • Speaker: Andrew Harvey, Dept. Of Statistics, London School of Economics
      • Discussant: TBA
      • Chair: TBA
      • Date/Time: Wednesday, September 25, 1996, 12:30 - 1:30 p.m.
      • Location: BLS Cognitive Lab, Postal Square Building, Room 2990, 2 Massachusetts Avenue, NE, Washington, DC (Red Line -- Union Station). Enter at 1st Street and Massachusetts Avenue.
      • Sponsor: Methodology Section
      Abstract:

      Control groups may be used in situations where there are time series observations. However, there appears to be no systematic treatment of the statistical issues in the literature. Structural time series models are formulated in terms of unobserved components, such as trends and seasonals, which have a direct interpretation, and multivariate structural time series models are shown to provide an ideal framework for carrying out intervention analysis with control groups. They also facilitate analysis of the potential gains from using control groups and the conditions under which single equation estimation is valid. Return to top
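A deliberately simplified difference-style sketch shows why a control series helps; the structural time-series models in the talk generalize this idea to stochastic trends and seasonals (this is not the talk's estimator):

```python
def intervention_effect(treated, control, t0):
    """Estimate a level-shift intervention at time t0 using a control series.

    The common trend is removed by differencing treated - control; the
    effect is the change in the mean of that difference from before t0
    to after t0.  Assumes the control shares the treated series' trend."""
    diff = [t - c for t, c in zip(treated, control)]
    before = diff[:t0]
    after = diff[t0:]
    return sum(after) / len(after) - sum(before) / len(before)
```

When the shared trend is deterministic, this recovers the shift exactly; the gain from the multivariate structural approach comes precisely when trends and seasonals are stochastic and this simple differencing is no longer valid.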

      Topic: Markov Models for Longitudinal Data from Complex Surveys

      • Speaker: Dwight Brock, National Institute on Aging
      • Chair: Julia L. Bienias, Bureau of the Census
      • Date/Time: Tuesday, October 8, 1996, 12:30 - 1:30 p.m.
      • Location: BLS Cognitive Lab, Postal Square Building, Room 2990, 2 Massachusetts Avenue, NE, Washington, DC (Red Line -- Union Station). Enter at 1st Street and Massachusetts Avenue. Call Sarah Jerstad (202-606-7390) to be placed on the visitor's list.
      • Sponsor: Methodology Section
      Abstract:

      Three practical aspects of recent epidemiologic research have led to the use of complex sample survey designs for long-term observational studies. First, policy development may require valid estimates for subgroups such as minorities or the very old, leading to stratified designs. Second, the statistical power in some studies may depend on the number of prevalent or incident cases identified, suggesting the use of two-phase designs that oversample from high-risk subgroups identified in the first phase. Finally, clustering may be used to reduce cost.

      We consider the situation in which repeated observations of a classification of status are made for persons in a large, complex sample, with possible states including death and two functional classifications for those still alive. In particular, we seek to develop a theoretical model for the repeated observations, including death as an absorbing state and summarizing the influence of covariates on the probability of becoming disabled, recovering from disability, and dying. We develop statistical procedures for estimating the probability of transitions, the coefficients for each covariate, and the standard errors of the estimates, adjusting for the complex sample design. Finally, we illustrate the use of the methods in a complex sample setting using physical function data from the Established Populations for Epidemiologic Studies of the Elderly (EPESE) of the National Institute on Aging. Return to top
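The transition-probability estimation can be sketched with a toy weighted estimator in which death is absorbing; a single survey weight per person stands in for the full design adjustment developed in the talk (a deliberate simplification).

```python
def transition_matrix(histories, weights=None,
                      states=("healthy", "disabled", "dead")):
    """Weighted estimate of one-step transition probabilities.

    Each history is a sequence of observed states for one person; weights
    holds one survey weight per person (defaulting to 1.0).  The 'dead'
    state is forced to be absorbing."""
    if weights is None:
        weights = [1.0] * len(histories)
    counts = {s: {t: 0.0 for t in states} for s in states}
    for hist, w in zip(histories, weights):
        for s, t in zip(hist, hist[1:]):
            counts[s][t] += w
    probs = {}
    for s in states:
        if s == "dead":
            # death is absorbing: remain dead with probability 1
            probs[s] = {t: (1.0 if t == "dead" else 0.0) for t in states}
            continue
        row_total = sum(counts[s].values()) or 1.0
        probs[s] = {t: counts[s][t] / row_total for t in states}
    return probs
```

Covariate effects on the transition probabilities, and design-based standard errors, are the substance of the talk and are omitted here.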

      Topic: Retrospective Recall: A Review of the Effects of Length of Recall on the Quality of Survey Data

      • Speaker: Nancy Mathiowetz, Joint Program in Survey Methodology, University of Maryland
      • Date/Time: Thursday, October 10, 12:30 - 1:30 p.m.
      • Location: BLS Cognitive Lab, Postal Square Building, Room 2990, 2 Massachusetts Avenue, NE, Washington, DC (Red Line -- Union Station). Enter at 1st Street and Massachusetts Avenue. If you wish to attend, call Linda Stinson (202) 606-7528 to have your name placed on a list for the building guards. You will also need your driver's license or other photo ID to enter the building.
      • Sponsor: Data Collection
      Abstract:

      One of the tenets among survey researchers is the belief that, for reports of factual information, an increase in the length of the recall period (e.g., a four month recall period compared to a three month recall period) will result in poorer quality data (e.g., Sudman and Bradburn, 1973). The research reviewed in this presentation addresses the hypothesis that increasing the length of recall period results in a decline in the quality of data. While there has been abundant evidence to support the hypothesis that the quality of retrospective recall declines as the length of the recall period increases, there are also a number of research findings that suggest little to no difference in the quality of retrospective recall as a function of the length of the recall period. It appears from the literature that the impact of the length of recall is a complex interaction among three primary factors: the length of the recall period, the nature of the autobiographical experience, and the type of response task (e.g., estimation vs. retrieval of episodic details). Return to top

      1996 Roger Herriot Award Methodology Session

      • Chair: Dan Kasprzyk, National Center for Education Statistics
      • Presentations:
        Integrated Survey Designs
        Jim Massey, Mathematica Policy Research

        Modern Cognitive Methods
        Betsy Martin, Bureau of the Census

        Network Sampling
        Barry Graubard, National Cancer Institute

        Federal Committee on Statistical Methodology
        Wray Smith, Synectics

      • Date/Time: Tuesday, October 15, 12:30 - 2:00 p.m.
      • Location: BLS Cognitive Lab, Postal Square Building, Room 2990, 2 Massachusetts Avenue, NE, Washington, DC (Red Line -- Union Station). Enter at 1st Street and Massachusetts Avenue. Call Sarah Jerstad (202-606-7390) to be placed on the visitor's list.
      • Sponsor: Methodology Section
      Abstract:

      This session is to honor the innovative work of Monroe Sirken, the 1996 Roger Herriot Awardee. Methodological papers will be given on four of the major statistical areas where Dr. Sirken has contributed over the years: network sampling, integrated survey design, modern cognitive methods in surveys, and the OMB Federal Committee on Statistical Methodology. The speakers will look at current developments in their respective areas and discuss Monroe's continuing contributions. We hope to have a small reception afterwards. Please come and join us. Return to top

      Topic: Adjusting for a Calendar Effect in Employment Time Series

      • Speakers: Stuart Scott and George Stamas, Bureau of Labor Statistics
      • Discussant: Michael L. Cohen, National Academy of Sciences
      • Chair: Brian Monsell, Bureau of the Census
      • Day/Time: Tuesday, October 22, 1996, 12:30 - 2:00 p.m.
      • Location: BLS Cognitive Lab, Postal Square Building, Room 2990, 2 Massachusetts Avenue, NE, Washington, DC (Red Line -- Union Station). Enter at Massachusetts Avenue and North Capitol Street. Call Linda Atkinson (202-219-0934) to be placed on the visitors list.
      • Sponsor: Economics Section
      Abstract:

      The Bureau of Labor Statistics Current Employment Statistics (CES) program is a monthly survey of nearly 400,000 business establishments nationwide; its reference period is the pay period including the 12th of each month. Of paramount importance to most CES data users are over-the-month changes in total nonfarm employment levels; thus, the seasonal adjustment of these data critically affects the analysis of national employment trends. This study investigates a calendar effect which may cause difficulties in interpreting movements in the current seasonally adjusted series. The effect arises because there are sometimes 4 and sometimes 5 weeks between the reference periods in any given pair of months (except February/March). This varying interval effect is estimated by an ARIMA model with regression variables, using the Bureau of the Census X-12-ARIMA procedure. Estimates of the interval effect are tested for significance. Series estimated with and without the interval adjustment are compared for smoothness and stability across successive time spans. Differences between over-the-month employment changes as previously published and as estimated with interval effect modeling are examined. Return to top

      Topic: Frequency Valid Multiple Imputation For Surveys With a Complex Design

      • Speaker: David A. Binder, Statistics Canada
      • Chair: Karol Krotki, Education Statistics Services Institute
      • Discussant: Phil Kott, National Agricultural Statistics Service
      • Date/Time: Wednesday, October 23, 1996, 12:30 - 1:30 p.m.
      • Location: BLS Cognitive Lab, Postal Square Building, Room 2990, 2 Massachusetts Avenue, NE, Washington, DC (Red Line -- Union Station). Enter at 1st Street and Massachusetts Avenue. Call Sarah Jerstad (202-606-7390) to be placed on the visitor's list.
      • Sponsor: Methodology Section
      Abstract:

      General conditions required for valid design-based inference when using multiple imputation to deal with missing values are considered. We focus on the estimation of means or totals and of their variances. We study multiple and proper imputation in a general setting, concentrating on the mathematical and statistical conditions required for valid design-based inference, treating the nonresponse mechanism as an additional phase of sampling. Return to top

      SIXTH ANNUAL MORRIS HANSEN LECTURE

      • Title: The Hansen Era: Statistical Research at the Census Bureau, 1940-1970
      • Speaker:
        Joseph Waksberg
        Chairman of the Board
        WESTAT
      • Date/Time: Wednesday, October 30, 1996, 3:30 p.m.
      • Place: Jefferson Auditorium, South USDA Building, Independence Avenue, S.W., between 12th & 14th Streets, Washington, D.C. Smithsonian Metro Stop. (Handicapped entrance at 12th & Independence)
      • Co-Sponsor: National Agricultural Statistics Service, USDA
      • Welcome: Rich Allen, National Agricultural Statistics Service
      • Chair: Daniel Levine, Consultant
      • Discussants:
        Dr. Margo Anderson
        Department of History
        University of Wisconsin-Milwaukee

        Dr. Robert Groves
        Joint Program in Survey Methodology
      • Reception: Following the lecture, nominally 5:30 to 6:30 p.m., on the patio of the USDA's Jamie L. Whitten Building
      Abstract:

      Mr. Waksberg will describe the development of sampling theory and related survey methods at the Census Bureau from about 1940 to 1970. The research carried out during this period and its application to survey methodology have profoundly affected how surveys are carried out. The talk will discuss the principal research findings, the milieu and context in which the research was carried out, the main participants, the original motivation for the direction of the research and later influences, and the effect of the research on methods used for censuses and sample surveys. There will also be a brief review of the first applications of computers to statistical work carried out at the Census Bureau. Mr. Waksberg will provide some personal reminiscences illustrating that the mathematical statistician on a project needs to pay attention to broader issues in a survey than sampling methods alone, e.g., cost, other sources of error, the management structure of the organization, how the data will be used, and the alternative survey methods available. Return to top

      Topic: Credit Risk, Credit Scoring, and the Performance of Home Mortgages

      • Speakers: Robert B. Avery and Raphael Bostic, Federal Reserve
      • Chair: Arthur Kennickell, Federal Reserve
      • Date/Time: Thursday, October 31, 1996; 12:30 - 2:00 p.m.
      • Location: Federal Reserve Board, Room M-3219, Martin Building, 20th and C Streets, NW (Orange/Blue Lines Foggy Bottom, or Red Line Farragut North). People outside the FRB, please call Linda Pitts at 202-736-5617 or email m1lkp99@frb.gov to have your name put on the list of attendees.
      • Sponsor: Economics Section
      Abstract:

      There has been a substantial recent growth in the use of "credit scoring models" to quantify and assess the credit risk of loan applicants. This paper presents new information on the distribution of credit scores across population groups and how credit scores relate to the performance of loans. In addition, the paper takes a special look at the statistical and regulatory issues related to the increased usage of scoring in the credit industry. Return to top

      Topic: Sampling and Estimation from Multiple List Frames

      • Speakers: John Amrhein and Phil Kott, USDA/NASS
      • Discussant: TBA
      • Date/Time: Thursday, November 7, 1996, 12:30 - 2:00 p.m.
      • Location: BLS Cognitive Lab, Postal Square Building, Room 2990, 2 Massachusetts Avenue, NE, Washington, DC (Red Line -- Union Station). Enter at 1st Street and Massachusetts Avenue. Call Sarah Jerstad (202-606-7390) to be placed on the visitor's list.
      • Sponsor: Methodology Section
      Abstract:

      Many economic and agricultural surveys are multi-purpose. It would be convenient if one could stratify the target population of such a survey in a number of different ways to satisfy a number of different purposes. Unfortunately, the expansion estimator based on a combination of independently drawn, stratified simple random samples can be notoriously inefficient. Bankier (JASA 1986) has shown how raking can remedy this problem. We build on Bankier's work in a number of directions.

      We first explore four different sampling methods that select similar samples across all stratifications, thereby reducing the overall sample size. Data from an agricultural survey are used to evaluate the effectiveness of these alternative sampling strategies. We then show how a calibration (i.e., reweighted) estimator can increase statistical efficiency by capturing what is known about the original stratum sizes in the estimation. Bankier's raking procedure is simply one method of calibration. Variance estimation for our proposed calibration estimator is discussed from both the design- and model-based viewpoints. Return to top
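      As a rough illustration of the raking idea the talk builds on, iterative proportional fitting rescales design weights so that the weighted counts reproduce the known stratum sizes of each stratification simultaneously. The sketch below is a generic IPF routine written for this note, not the NASS or Bankier implementation; the function and variable names are illustrative.

```python
import numpy as np

def rake(weights, strat_a, strat_b, totals_a, totals_b, iters=100):
    """Iterative proportional fitting (raking): rescale weights so the
    weighted count in each stratum matches its known total, alternating
    between the two stratifications until the adjustments stabilize."""
    w = np.asarray(weights, dtype=float).copy()
    for _ in range(iters):
        for strat, totals in ((strat_a, totals_a), (strat_b, totals_b)):
            for group, total in totals.items():
                mask = strat == group          # units in this stratum
                current = w[mask].sum()        # current weighted count
                if current > 0:
                    w[mask] *= total / current # scale to the known total
    return w
```

      With consistent margins (both sets of stratum totals summing to the same grand total), the fitted weights reproduce the known sizes under both stratifications at once, which is the efficiency gain the calibration estimator exploits.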

      Topic: The Impacts of Two-Year College on the Labor Market and Schooling Experiences of Young Men

      • Speaker: Brian J. Surette, Federal Reserve
      • Chair: Arthur Kennickell, Federal Reserve
      • Date/Time: Wednesday, November 13, 1996, 12:30 - 2:00 p.m.
      • Location: Federal Reserve Board, Room M-3319, Martin Building, 20th and C Streets, NW (Orange/Blue Lines Foggy Bottom, or Red Line Farragut North). People outside the FRB, please call Linda Pitts at 202-736-5617 or email m1lkp99@frb.gov to have your name put on the list of attendees.
      • Sponsor: Economics Section
      Abstract:

      Over fifty percent of college freshmen attend two-year colleges. Using data from the National Longitudinal Survey of Youth, this paper examines how attending such schools influences the wages, employment, and subsequent college attendance decisions of young men. The paper models decisions using a maximum likelihood framework in which lagged dependent variables and a rich set of background measures explain current outcomes. Discrete-factor, random effects estimators control for the impacts of unobserved variables that persist either across time or across outcomes. These semi-parametric techniques provide consistent and efficient estimates of the overall benefits of two-year college attendance. Not surprisingly, estimates obtained from this more rigorous model differ substantially from estimates that do not explicitly address unobserved heterogeneity. Overall, the paper strongly supports the conclusion that two-year colleges successfully prepare students for both further schooling and the labor market. Return to top

      Topic: The Impact of Incentives in a Government Survey

      • Speaker: Carolyn Shettle, National Science Foundation
      • Discussant: Jerry Coffey, Office of Management and Budget
      • Date/Time: Tuesday, November 19, 1996, 12:30 - 1:30 p.m.
      • Location: BLS Cognitive Lab, Postal Square Building, Room 2990, 2 Massachusetts Avenue, NE, Washington, DC (Red Line -- Union Station). Enter at 1st Street and Massachusetts Avenue. Call Sarah Jerstad (202-606-7390) to be placed on the visitor's list.
      • Sponsor: Methodology Section
      Abstract:

      This paper examines the impacts of using a small ($5.00) monetary incentive in a lengthy mail survey of moderate saliency. The paper contrasts two experimental groups differing only in terms of whether sample members received a $5 monetary incentive in the first mailing of the survey. This paper provides analyses of work done in the 1992 pretest of a survey designed by the National Science Foundation, Mathematica Policy Research, Inc., and the Census Bureau and conducted by the Census Bureau. The treatment groups are contrasted on the following factors: (1) response rate; (2) non-response bias; (3) data quality; (4) indicators of respondents' views of incentives; and (5) cost-effectiveness. On the basis of the information in this study and the literature reviewed, the authors believe that incentives can provide a cost-effective survey tool for use in government surveys when moderately high response rates are needed. This study and the other studies reviewed do not indicate that the potential savings of incentives are paid for through increased non-response bias, decreased data quality, or respondent ill will. Return to top

      Topic: Election Night Projections

      • Chair: Douglas A. Samuelson, InfoLogix, Inc.
      • Speaker: Jack Moshman, Moshman Associates, Inc.
      • Date/Time: Wednesday, November 20, 1996, 6:00 - 8:00 p.m.
      • Location: ANSER conference center, Suite 800, Crystal Gateway 3, 1215 Jefferson Davis Highway, Arlington, VA (2 blocks north of Crystal City Metro station; plenty of free street parking available nearby, on Crystal Drive)
      • Wine and cheese at 6:00 p.m., speaker at 6:30 p.m. Interested persons may join the speaker for dinner afterwards at a nearby restaurant. For further information contact Doug Samuelson, (703) 978-5030, e-mail: dsamuel@seas.gwu.edu.
      • Sponsor: Joint program with Washington OR/MS Council (WORMSC)
      Abstract:

      Dr. Moshman, who invented the statistical method of projecting state-by-state election night results in 1960, will review the current state of the subject and discuss the results and lessons learned in the 1996 projections. Return to top

      Topic: How GPRA will affect you: A panel discussion on the policy implications of the Government Performance and Results Act

      • Panelists:
        Carolyn Burstein, Consultant, Ivy Planning Group (retired this September from the Patent and Trademark Office; formerly Senior Executive with the Federal Quality Institute)

        Nancy Kirkendall, Chief Statistician, Office of Statistical Standards, Energy Information Administration, Department of Energy (Panel Chair)

        J. Christopher Mihm, Assistant Director for Federal Management and Workforce Issues, General Accounting Office

        Christopher Wye, Director, Program on Improving Government Performance, National Academy of Public Administration

      • Date/Time: Tuesday, December 3, 1996, 3:00 - 4:30 p.m.
      • Location: BLS Conference and Training Center, Postal Square Building, 2 Massachusetts NE, Washington, DC (Red Line -- Union Station). Enter at 1st Street. Conference Center on the ground level. Call Harold Johnson (202-606-6630, Extension 318) to be placed on the visitor's list. You may also e-mail him: Johnsonh@OA.PSB.BLS.GOV.
      • Sponsor: Public Policy and Statistics Section
        Carolee Bush (DOT) and N. Phillip Ross (EPA), Co-Chairs
      Abstract:

      Enactment of the Government Performance and Results Act (GPRA) in 1993 underscored a shift away from such traditional concerns as staffing and activity levels toward the overriding issue of results. GPRA requires agencies to set goals, measure performance, and report on their accomplishments.

      The panel presentation will focus on the expectations inherent in the Act, initial experiences in implementation, and the potential policy implications. GPRA demonstrates Congress's determination to make agencies accountable for their performance. Central features of the Act are a strategic plan defining purpose and direction and the setting of annual performance targets, with performance indicators designated to monitor specific, measurable, achievable results. The latter are directed toward achieving program outcomes and customer/stakeholder satisfaction.

      The panel will note the links between GPRA and the annual budget process. While GPRA does not require a link between performance targets and specific budget amounts, it does require that Performance Plans express priority objectives, in a quantifiable and measurable form, and targeted levels of performance that programs expect to achieve by the end of the fiscal year. Return to top

      Topic: Scheduling Periodic Examinations for the Early Detection of Disease: Applications to Breast Cancer

      • Speaker: Dr. Sandra Lee, Harvard School of Public Health
      • Chair: Dr. Julie Legler, National Cancer Institute
      • Date/Time: Wednesday, December 4, 1996, TBA
      • Location: EPN - room TBA
      Abstract:

      We develop and extend earlier investigations on stochastic models for selecting examination schedules targeted at earlier diagnosis of chronic diseases. The general aim is to provide guidelines for public health programs in the choice of examination schedules. The main features of such schedules are the initial age at which to begin a scheduled examination program and the intervals between subsequent examinations. We introduce two basic ideas: the threshold method and the schedule sensitivity method. These concepts, either individually or together, can lead to satisfactory examination schedules. We illustrate the applicability of our methods to scheduling examinations for female breast cancer. Return to top

      Topic: Small-Biz Blarney

      • Speaker: David Hirschberg, Consulting economist
      • Discussant: William Dickens, The Brookings Institution
      • Chair: Fritz Scheuren, George Washington University
      • Day/Time: Wednesday, December 11, 1996, 12:30 - 2:00 p.m.
      • Location: BLS Cognitive Lab, Postal Square Building, Room 2990, 2 Massachusetts Avenue, NE, Washington, DC (Red Line -- Union Station). Enter at Massachusetts Avenue and North Capitol Street. Call Linda Atkinson (202-219-0934) to be placed on the visitors list.
      • Sponsor: Economics Section
      Abstract:

      This paper demolishes one of America's great economic myths: that small businesses create all the jobs. To the contrary, employment data from the Census Bureau's Standard Statistical Establishment List show that small businesses are not job creators. The myth is based on the regression fallacy.

      A copy of the paper can be found in SLATE, an online magazine, at WWW.SLATE.COM: in the "Hey, Wait a Minute" department, go to the end of the current article, find the previous "Hey, Wait a Minute" articles, and see "Small-Biz Blarney," 96-10-17.

      In addition, the Census Bureau's data provide a matrix transition model that explains why the size distribution of employment by firm and establishment size has not changed in 50 years. Return to top

      Topic: 100 Years of Meetings: A Celebration of the First American Statistical Association Meeting in Washington

      • Speakers:

        Rich Allen, National Agricultural Statistics Service
        "A New Look at Washington Statistical Society History"

        Ray Waller, American Statistical Association
        "Developments in the American Statistical Association in the Past 100 Years"

        Fritz Scheuren, George Washington University
        "100 Years of Developments in Training Government Statisticians"

      • Chair: Ed Goldfield, Committee on National Statistics
      • Date/Time: Tuesday, December 17, 1996, 3:30 - 5:00 p.m. before WSS Holiday Party
      • Location: BLS Conference and Training Center, Postal Square Building, Suite G440, Rooms 9 and 10, 2 Massachusetts Avenue, N.E., Washington, D.C. (Red Line -- Union Station). Enter at First Street - immediately across from the Metro. Take elevator to Ground Level. Please call Shawna McClain (703-235-5211, Extension 100) to be placed on the visitor's list.
      • Sponsor: WSS Special Session
      Abstract:

      The first American Statistical Association meeting in Washington was held December 31, 1896. The remarks that evening were informal but prophetic, as ASA President Francis A. Walker stated "…to express the hope and the expectation of the Association that this new departure of holding scientific meetings in this city will result in a very great advantage not only to the Washington members themselves…, but also to the Association as a whole…." The meeting was also significant because it was President Walker's last public appearance; he fell ill on the trip and died a few days later.

      In this session, Rich Allen will take a different look at the Washington Statistical Society from the histories which have been published earlier. Ray Waller will examine the changes in the American Statistical Association since that first meeting in Washington. A major theme of President Walker's comments in 1896 was that the U.S. Government had not spent anything on the training and preparation of the staffs which prepared official statistics. Fritz Scheuren will examine that theme and give his comments on changes and improvements during the last century. Return to top
