Industry Solutions Track

Redefining Student Success

Samuel Berestizhevsky and Tanya Kolosova

Creating successful learners is a challenge for educational institutions. A series of knowledge exams is commonly used to assess student proficiency. However, the raw scores of these exams are often analyzed incorrectly, which leads to wrong inferences about students’ strengths and gaps and produces misleading recommendations on how to close those gaps. These errors typically originate from the misuse of raw exam scores.
To address this problem, the authors developed a solution that uses the Polytomous Rasch Measurement Model, which analyzes raw exam scores correctly by simultaneously estimating the difficulty of items and the ability of students.
Identifying foundational exam items makes it possible to detect the areas of knowledge essential for students’ further success; the solution uses Relational Bayesian Networks for this purpose.
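
As background on the measurement model named above, the partial credit formulation of the polytomous Rasch model expresses the probability that student n responds in category x of item i in terms of the student's ability θ_n and the item's category thresholds δ_ik. This is the standard textbook form, shown here only for orientation; the paper's exact parameterization may differ.

```latex
P(X_{ni} = x \mid \theta_n)
  = \frac{\exp\!\Bigl(\sum_{k=0}^{x} (\theta_n - \delta_{ik})\Bigr)}
         {\sum_{j=0}^{m_i} \exp\!\Bigl(\sum_{k=0}^{j} (\theta_n - \delta_{ik})\Bigr)},
  \qquad x = 0, 1, \ldots, m_i
```

with the convention that the sum from k = 0 to 0 is zero, so higher ability θ_n raises the probability of higher score categories while higher thresholds δ_ik lower it.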

The developed solution is based solely on the SAS System. It creates quantitative Student Success Profiles for each course, identifies gaps, suggests ways to improve students’ knowledge, and helps educators lead students to success. The paper describes the solution’s capabilities and presents a real-life case study.

View paper.

Validate the validated: A Python script for study leads to review clinical outputs

Varaprasad Ilapogu, Masaki Mihaila, Janet Li and Ernesto Gutierrez

Datasets and statistical outputs produced by clinical SAS programming teams in the pharmaceutical industry are often validated individually by parallel programming prior to submission to other teams (e.g., Statistics and Medical Writing). This process leaves out the cross-checking of outputs against other outputs that present similar information. For example, the population size of each treatment group in a study appears in many of the outputs, yet there is often no programmatic process in place to check whether the population sizes match across the different statistical outputs. We have developed a Python script that addresses this gap by checking information across statistical outputs. The script extracts the common elements from each statistical output (e.g., individual tables and figures in RTF form) and presents the relevant information in a single, easily accessible document (e.g., an Excel spreadsheet) to facilitate cross-checking. We hope that this additional step can enhance the quality and increase the efficiency of the study package review process.
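
To make the idea concrete, here is a minimal sketch (not the authors' script) of the kind of cross-check described: it assumes the study's tables and figures sit in one folder as RTF files, pulls out every "N=nn" count it finds, and writes one row per output to an Excel workbook so a reviewer can see at a glance whether the population sizes agree. The folder layout, the regular expression, and the use of openpyxl are assumptions for illustration.

```python
"""Cross-check population counts ("N=...") across RTF outputs.

A minimal illustration: scan every RTF table/figure in a folder, pull
out the "N=nn" counts that appear in the headers, and collect them in
one spreadsheet so a reviewer can spot mismatches at a glance. File
layout, regex, and output columns are assumptions, not the authors'
actual script.
"""
import re
from pathlib import Path

from openpyxl import Workbook

N_PATTERN = re.compile(r"N\s*=\s*(\d+)")          # e.g. "Placebo (N=86)"


def extract_counts(rtf_path: Path) -> list[str]:
    """Return every N=... count found in one RTF file, in document order."""
    text = rtf_path.read_text(errors="ignore")
    return N_PATTERN.findall(text)


def build_review_sheet(output_dir: Path, xlsx_path: Path) -> None:
    """Write one row per RTF output: file name plus the counts it reports."""
    wb = Workbook()
    ws = wb.active
    ws.title = "Population counts"
    ws.append(["Output", "Counts found (in document order)"])
    for rtf_file in sorted(output_dir.glob("*.rtf")):
        counts = extract_counts(rtf_file)
        ws.append([rtf_file.name, ", ".join(counts) or "none found"])
    wb.save(xlsx_path)


if __name__ == "__main__":
    build_review_sheet(Path("tlf_outputs"), Path("population_check.xlsx"))
```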

View paper.

Stay Ahead of the Curve: How to Implement the New FDA-Recommended Study Data Standardization Plan (SDSP) in Your Organization

Aakar Shah and Heather Riley

The Study Data Standardization Plan (SDSP) is a living document that assists the Food and Drug Administration (FDA) in identifying potential data standardization issues early in the development program. As the FDA has already started asking some sponsors to provide an SDSP, it may not be long until this new FDA recommendation becomes a new FDA requirement. Initial recommendations have been provided through a PHUSE-sponsored limited-duration team. We have taken these recommendations and expanded them into the tools and process necessary to help a sponsor implement the SDSP within their organization. We walk through a step-by-step process for implementing the SDSP in your organization. This paper focuses on: SDSP background, sponsor benefits of implementing the SDSP, development of an SDSP project charter, stakeholder analysis, RACI, the overall SDSP process and timelines, pilot project selection for SDSP implementation, and long-term implementation.

View paper.

Root Cause Analysis Using Pinnacle 21 Validation Reports: How to Improve End-to-End Programming Process

Aakar Shah and Tracy Sherman

As reviewers adjudicate Pinnacle 21 validation reports to assess the quality of submission activities, further analysis can uncover the underlying process-related issues that cause those findings in the first place. Even a single Pinnacle 21 validation report can identify underlying shortcomings; if you are a large pharmaceutical company or have access to a large number of Pinnacle 21 validation reports, however, you can analyze broader trends and improve all aspects of the current programming process. In this paper, we discuss how to analyze Pinnacle 21 validation reports, categorize findings into buckets such as Data Collection, SDTM Process, ADaM Process, CRO Management, e-Submission Process, Internal Stakeholder Management, External Stakeholder Management, and Training, and subsequently how to use root cause analysis to identify and address end-to-end programming process improvements.
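
As a concrete illustration of the kind of trend analysis described (not the authors' own tooling), the sketch below tallies findings by process bucket across many reports. It assumes the Pinnacle 21 findings have been exported to CSV files with a "Rule ID" column, and the rule-to-bucket mapping shown is hypothetical.

```python
"""Tally Pinnacle 21 findings by process-area bucket across many reports.

A rough sketch, not the authors' tool: it assumes the validation
findings have already been exported to CSV files with a "Rule ID"
column, and that each rule can be mapped to one of the buckets
discussed in the paper. The rule-to-bucket map here is illustrative.
"""
import csv
from collections import Counter
from pathlib import Path

# Hypothetical mapping of rule IDs to process buckets.
RULE_BUCKETS = {
    "SD0002": "Data Collection",
    "SD1117": "SDTM Process",
    "AD0258": "ADaM Process",
}
DEFAULT_BUCKET = "Unclassified"


def tally_findings(report_dir: Path) -> Counter:
    """Count findings per bucket across all exported CSV reports."""
    counts: Counter = Counter()
    for csv_file in report_dir.glob("*.csv"):
        with csv_file.open(newline="") as handle:
            for row in csv.DictReader(handle):
                rule_id = (row.get("Rule ID") or "").strip()
                counts[RULE_BUCKETS.get(rule_id, DEFAULT_BUCKET)] += 1
    return counts


if __name__ == "__main__":
    for bucket, count in tally_findings(Path("p21_reports")).most_common():
        print(f"{bucket:<35}{count}")
```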

View paper.

CRO Oversight: Regulatory Submission Point of View

Aakar Shah and Tracy Sherman

As regulatory agencies become more stringent in accepting submissions and as Contract Research Organizations’ (CROs’) share of the work increases in the pharmaceutical industry, smart oversight of CROs becomes critical to a sponsor’s success. Regardless of the extent of CRO involvement, at the end of the day the sponsor is responsible for submission compliance to regulatory agencies. Without adequate CRO oversight, a submission delay of even one day can cost millions of dollars and, more importantly, affect patients’ lives. This paper discusses various areas related to CRO oversight for successful regulatory submissions, such as stakeholder management techniques, sponsor communication, use of sponsor metadata, contract updates, and the sponsor’s submission-focused review.

View paper.

Bespoke outputs, just like everyone else’s: using the SAS Macro Language to create template programs for standard presentations of your client’s clinical trial data

Sean-Paul Claypool

While the structure and content of the tables, listings, and figures (TLFs) used to present clinical trial data are generally established, the myriad trial designs and data types collected across studies create resource challenges for contract research organizations (CROs), which must “re-invent the wheel” to create standardized outputs for each client’s specific data. By leveraging the SAS Macro Language, CROs can work within constrained budgets by developing template programs that produce common output types. Using adverse event and laboratory result summary tables as examples, this paper illustrates how such template programs may be designed with SAS macro variables and simple macro scripts to automate dataset manipulation and output report generation. With a robust library of flexible template programs, CROs can both maximize their resource utilization and deliver quality data products for their clients.
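
The paper's technique lives in the SAS Macro Language; as a language-neutral sketch of the same templating idea (one parameterized program reused across studies by swapping in the input dataset, grouping variables, and title), here is a hypothetical Python analogue. The CDISC-style column names are assumptions, and this is an analogy rather than the authors' code.

```python
"""A language-neutral sketch of the templating idea the paper applies
with the SAS Macro Language: one parameterized "template program"
that can summarize different studies' data by swapping the input
file, grouping columns, and output title. Column names (USUBJID,
TRT01A, AEBODSYS) follow common CDISC conventions but are assumptions
here; this is an analogy, not the authors' SAS code.
"""
import pandas as pd


def ae_summary_template(adae_path: str, trt_var: str, class_var: str,
                        title: str, out_path: str) -> None:
    """Count subjects with at least one adverse event per class and treatment."""
    adae = pd.read_csv(adae_path)
    counts = (adae.drop_duplicates(subset=["USUBJID", trt_var, class_var])
                  .groupby([class_var, trt_var])["USUBJID"]
                  .count()
                  .unstack(fill_value=0))
    with open(out_path, "w") as out:
        out.write(title + "\n\n" + counts.to_string())


# The same template serves any study: only the parameters change,
# much as macro variables would in the SAS version.
ae_summary_template("adae_study01.csv", trt_var="TRT01A",
                    class_var="AEBODSYS",
                    title="Subjects with at least one AE by SOC and treatment",
                    out_path="t_ae_soc_study01.txt")
```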

View paper.