It takes a lot of people to make a winning team. Everybody's contribution is important.
Continuing Professional Development (CPD) Workshops
The aim of the workshops is to provide clients with basic statistical tools for inputting, describing and analysing data. The workshops are predominantly computer-based, using software packages such as Minitab, MS Excel, NVivo, R and SPSS. The workshops are designed for clients with no prior knowledge of statistics, and mathematical theory is kept to a minimum. The workshops can be adapted to suit the requirements of the client. Requests for other workshops may be made to sean.lacey@cit.ie at any time.
Non-laboratory-based
1. Statistics for Research
Statistics fills the crucial gap between information and knowledge. Society cannot be run effectively based on hunches or trial and error. In research, much depends on the correct analysis of numerical information. Decisions based on data will provide better results than those based on intuition or gut feelings.
One of the major problems any researcher faces is reducing complex situations to manageable formats to describe, explain or model them. This is where statistics comes in.
This workshop highlights which statistics to use, why to use those statistics, and when to use them. Through using appropriate statistics, you will be able to make sense of the data you have collected so that you can tell your research story coherently and with justification.
2. Research Questionnaire Design and Microsoft Forms
This course introduces the basic elements of research questionnaire design. The outline of the course is as follows:
- Steps involved in designing a successful questionnaire;
- Qualities of good questions;
- Questionnaire length and wording;
- Discussion on how questionnaires can be analysed using various statistical packages – e.g., SPSS, NVivo, etc.;
- Introduction to using Microsoft Forms for questionnaire design.
R and SPSS laboratory-based
3. Introduction to Data Analysis
This course provides an introduction to R and/or SPSS. The outline of the course is as follows:
- A basic introduction to statistics. It is important before using R/SPSS that some basic statistical terminology is understood (terminology will be kept to a minimum);
- Demonstration of how to set up an R/SPSS file;
- Concluding with the compilation of descriptive statistics (both numerical and graphical) for various data types.
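As a flavour of the kind of output the workshop works towards, the short R sketch below compiles numerical and graphical descriptive statistics for one numeric and one categorical variable (the data and variable names are purely illustrative):

```r
# Illustrative data: one numeric and one categorical variable
scores <- c(12.1, 14.3, 9.8, 15.0, 11.7, 13.2, 10.5, 14.8)
group  <- factor(c("A", "B", "A", "B", "A", "B", "A", "B"))

# Numerical descriptive statistics
summary(scores)    # minimum, quartiles, median, mean, maximum
sd(scores)         # standard deviation
table(group)       # frequency table for the categorical variable

# Graphical descriptive statistics
hist(scores, main = "Histogram of scores", xlab = "Score")
boxplot(scores ~ group, main = "Scores by group")
barplot(table(group), main = "Group frequencies")
```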
4. Statistical Inference
This course provides a look into the world of hypothesis testing with R and/or SPSS. Basic knowledge of R/SPSS is assumed. The outline of the course is as follows:
- Explanation of the idea behind hypothesis testing (theory kept to a minimum);
- Worked examples of testing for normality/differences/relationships with various types of data;
- How to report on statistical results.
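A minimal R sketch of the kind of hypothesis tests covered (the data are simulated and purely illustrative):

```r
# Illustrative data: measurements from two independent groups
x <- c(5.1, 4.8, 5.6, 5.0, 4.9, 5.3)
y <- c(5.9, 6.1, 5.7, 6.3, 5.8, 6.0)

# Testing for normality (Shapiro-Wilk test)
shapiro.test(x)
shapiro.test(y)

# Testing for a difference between the two groups (independent-samples t-test)
t.test(x, y, var.equal = TRUE)

# Non-parametric alternative if normality is doubtful
wilcox.test(x, y)

# Testing for a relationship between two numeric variables
cor.test(x, y, method = "pearson")
```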
5. Multi-Variable Data Analysis
This course will build on Statistical Inference. The course will focus on:
- How to set up and analyse various data sets appropriately, using both parametric and non-parametric approaches;
- Where the data do not meet parametric assumptions, suitable transformations to apply before parametric tests are used;
- One-way Analysis of Variance (ANOVA) with suitable post-hoc testing;
- Between-subjects factorial experiments;
- Within-subjects factorial experiments;
- Mixed factorial experiments.
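A brief R sketch, using simulated data, of the one-way ANOVA with post-hoc testing and a between-subjects factorial ANOVA listed above:

```r
# Simulated data: 30 responses across three groups
set.seed(1)
dat <- data.frame(
  response = c(rnorm(10, 10), rnorm(10, 12), rnorm(10, 11)),
  group    = factor(rep(c("A", "B", "C"), each = 10))
)

# One-way ANOVA with Tukey post-hoc comparisons
fit <- aov(response ~ group, data = dat)
summary(fit)            # overall F-test
TukeyHSD(fit)           # pairwise post-hoc comparisons

# Non-parametric alternative
kruskal.test(response ~ group, data = dat)

# Between-subjects factorial (two-way) ANOVA with an interaction term
dat$sex <- factor(rep(c("M", "F"), times = 15))
summary(aov(response ~ group * sex, data = dat))
```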
6. Multiple Regression
Correlation analysis is used to find out if there is a statistically significant relationship between two variables, and linear regression analysis is used to make predictions based on that relationship. However, most situations are too complicated to model with just two variables; multiple regression is needed. Multiple regression is a statistical technique that allows us to predict a score on one variable based on the scores on several other variables (see the R sketch after the list below). This course covers:
- Scatterplots and partial regression plots;
- Correlation and regression analysis;
- Test for homoscedasticity;
- How to detect multicollinearity and outliers;
- How to check that the residuals (errors) are approximately normally distributed;
- How to interpret regression equations and how to use them to make predictions.
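A minimal R sketch of a multiple regression with the diagnostics listed above, using the built-in mtcars dataset (the car package used for variance inflation factors is an assumption and must be installed separately):

```r
# Multiple regression on the built-in mtcars dataset:
# predict fuel consumption (mpg) from weight, horsepower and displacement
data(mtcars)
model <- lm(mpg ~ wt + hp + disp, data = mtcars)
summary(model)                   # coefficients, R-squared, overall F-test

# Residual diagnostics
plot(model, which = 1)           # residuals vs fitted values (homoscedasticity)
plot(model, which = 2)           # normal Q-Q plot of the residuals
shapiro.test(residuals(model))   # formal normality check

# Multicollinearity check via variance inflation factors
# (assumes the car package is installed)
car::vif(model)

# Using the fitted equation to make a prediction for a new observation
predict(model, newdata = data.frame(wt = 3.0, hp = 120, disp = 200))
```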
7. Time Series Analysis
This course will provide the necessary tools to develop and critically evaluate time series models. The forecasting function of these models is presented and evaluated, enabling the creation of short- and medium-term forecasting models. A basic understanding of time series analysis is assumed. The following topics will be introduced:
- Decomposition (trend, periodicity, seasonality, white noise);
- Smoothing techniques;
- Autoregressive (AR), Moving Average (MA) and mixed (ARMA) models;
- Forecast Error and Confidence Intervals.
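A short R sketch of these ideas using the built-in AirPassengers series (the model orders chosen are illustrative, not a recommendation):

```r
# Built-in monthly series of airline passenger numbers
data(AirPassengers)

# Decomposition into trend, seasonal and random components
plot(decompose(AirPassengers))

# Smoothing via Holt-Winters exponential smoothing
hw <- HoltWinters(AirPassengers)
plot(hw)

# A seasonal ARIMA model fitted to the log series, with a 12-month forecast
fit  <- arima(log(AirPassengers), order = c(1, 1, 1),
              seasonal = list(order = c(0, 1, 1), period = 12))
pred <- predict(fit, n.ahead = 12)
pred$pred                        # point forecasts (on the log scale)
pred$pred + 1.96 * pred$se       # approximate upper 95% forecast limits
pred$pred - 1.96 * pred$se       # approximate lower 95% forecast limits
```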
8. Multivariate Data Analysis
Exploratory Factor Analysis (EFA) is a statistical approach for determining the correlation among the variables in a dataset. This type of analysis provides a factor structure (a grouping of variables based on strong correlations). In general, an EFA prepares the variables to be used for cleaner structural equation modelling. An EFA should always be conducted for new datasets. The advantage of an EFA over a confirmatory factor analysis (CFA) is that no a priori theory about which items belong to which constructs is imposed, which means the EFA is able to spot problematic variables. This workshop will focus on:
- Data screening;
- Factoring methods;
- Appropriateness of data;
- Communalities;
- Factor structure;
- Convergent and discriminant validity;
- Reliability;
- Common EFA problems.
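A minimal base-R sketch of an exploratory factor analysis using the built-in Harman74.cor correlation matrix (24 psychological tests); the choice of four factors is illustrative only, and appropriateness checks such as KMO and Bartlett's test are available in add-on packages such as psych:

```r
# Built-in correlation matrix of 24 psychological tests (145 observations)
data(Harman74.cor)

# Maximum-likelihood EFA extracting four factors with varimax rotation
efa <- factanal(covmat = Harman74.cor, factors = 4, rotation = "varimax")

print(efa$loadings, cutoff = 0.3)   # factor structure (small loadings suppressed)
1 - efa$uniquenesses                # communalities
efa$PVAL                            # test of whether four factors are sufficient
```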
Minitab laboratory-based
9. Introduction to Data Analysis
Part I of this workshop provides an introduction to Minitab. The outline is as follows:
- A basic introduction to statistics. It is important before using Minitab that some basic statistical terminology is understood (terminology will be kept to a minimum);
- Demonstration of how to set up a Minitab file;
- Concluding with the compilation of descriptive statistics (both numerical and graphical) for various data types.
Part II of the workshop provides a look into the world of hypothesis testing with Minitab. The outline is as follows:
- Explanation of the idea behind hypothesis testing (theory kept to a minimum);
- Worked examples of testing for normality/differences/relationships with various types of data;
- How to report on statistical results.
10. ANOVA and Design of Experiments (DOE)
Part I of this workshop will look at analysing data using Analysis of Variance:
- Completely randomised designs;
- Randomised block designs;
- Factorial designs.
Part II will introduce Design of Experiments (DOE). DOE helps investigate the effects of several input variables (factors) on an output variable (response) at the same time. These experiments consist of a series of runs, or tests, in which purposeful changes are made to the input variables and data are collected at each run. DOE is used to identify the process conditions and product components that affect quality, and then to determine the factor settings that optimise results (a brief R sketch of a simple factorial design follows the list below). Part II of the workshop covers:
- DOE Language and Concepts;
- Creating a factorial design;
- Viewing a design and entering data in the worksheet;
- Analysing a design and interpreting the results;
- Using a stored model to create factorial plots and predict a response.
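Although this workshop uses Minitab, the idea of a small full factorial experiment can be sketched in R; the factor names, levels and response values below are entirely hypothetical:

```r
# A 2^3 full factorial design in coded units (-1 = low level, +1 = high level)
design <- expand.grid(
  Temp     = c(-1, 1),
  Pressure = c(-1, 1),
  Time     = c(-1, 1)
)

# Hypothetical response measured at each of the 8 runs
design$Yield <- c(60, 72, 54, 68, 52, 83, 45, 80)

# Model with main effects and all two-way interactions
fit <- lm(Yield ~ (Temp + Pressure + Time)^2, data = design)
summary(fit)

# Interaction plot for two of the factors
with(design, interaction.plot(Temp, Pressure, Yield))

# Use the stored model to predict the response at a chosen factor setting
predict(fit, newdata = data.frame(Temp = 1, Pressure = -1, Time = 1))
```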
11. Process Capability Analysis
A process capability analysis is performed to determine whether a process is statistically capable. Based on the results of the capability study, the number of defective components the process would be expected to produce can be estimated (a brief R sketch follows the list below). The workshop covers:
- Run and Pareto chart;
- Variables control charts: XBar, R, S, XBar-R, XBar-S, I, MR, I-MR, I-MR-R/S, zone, Z-MR;
- Historical/shift-in-process charts;
- Box-Cox and Johnson transformations;
- Process capability for multiple variables;
- Tolerance intervals;
- Acceptance sampling and OC curves.
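A minimal R sketch of the capability calculation; the measurements are simulated and the specification limits are assumed purely for illustration:

```r
# Simulated measurements from a stable process, with assumed specification limits
set.seed(123)
x   <- rnorm(100, mean = 10.02, sd = 0.05)
LSL <- 9.85      # lower specification limit (assumed)
USL <- 10.15     # upper specification limit (assumed)

xbar <- mean(x)
s    <- sd(x)

Cp  <- (USL - LSL) / (6 * s)                    # potential capability
Cpk <- min(USL - xbar, xbar - LSL) / (3 * s)    # actual capability, allowing for off-centring
c(Cp = Cp, Cpk = Cpk)

# Estimated proportion of out-of-specification parts, assuming normality
pnorm(LSL, xbar, s) + (1 - pnorm(USL, xbar, s))
```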
12. Correlation, Regression and Multivariate Analysis
Correlation analysis is used to find out if there is a statistically significant relationship between two variables, and linear regression analysis is used to make predictions based on that relationship. However, most situations are too complicated to model with just two variables; multiple regression is needed. Multiple regression is a statistical technique that allows us to predict a score on one variable based on the scores on several other variables. This course covers:
- Scatterplots and partial regression plots;
- Correlation and regression analysis;
- Test for homoscedasticity;
- How to detect multicollinearity and outliers;
- How to check that the residuals (errors) are approximately normally distributed;
- How to interpret regression equations and how to use them to make predictions;
- How to use Principal Component Analysis to deal with multicollinearity.
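Although this workshop uses Minitab, the Principal Component Analysis step can be sketched in R; the choice of predictors from the built-in mtcars dataset is arbitrary and purely illustrative:

```r
# Predictors from the built-in mtcars dataset that are strongly inter-correlated
data(mtcars)
predictors <- mtcars[, c("wt", "hp", "disp", "drat")]
cor(predictors)                       # large correlations signal multicollinearity

# Principal Component Analysis on the standardised predictors
pca <- prcomp(predictors, scale. = TRUE)
summary(pca)                          # proportion of variance explained per component
pca$rotation                          # loadings of the original variables

# Regress the response on the first two (uncorrelated) principal components
scores <- as.data.frame(pca$x[, 1:2])
summary(lm(mtcars$mpg ~ PC1 + PC2, data = scores))
```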
Microsoft Excel laboratory-based
13. Excel for Business Analysis
MS Excel is a package that is used extensively in business and banking. This course will:
- Lay the foundations of financial modelling and business analysis;
- Explore various features of Excel that help in business analysis and modelling;
- Explain uses of Excel in various modelling and analysis scenarios;
- Use various inbuilt Excel functions – e.g., VLOOKUP, IF, COUNTIF, ISNA, SUMPRODUCT, HLOOKUP, CONCATENATE, etc.;
- Demonstrate how to record Macros.
14. Introduction to Data Analysis in MS Excel
The outline of the workshop is:
- A basic introduction to statistics. It is important before using MS Excel for data analysis that some basic statistical terminology is understood;
- Outlining the appropriate descriptive statistics (both numerical and graphical) depending on the type of data;
- Concluding with the compilation of reliable descriptive statistics using PivotTables, PivotCharts and the Data Analysis ToolPak.
15. Introduction to Statistical Inference in MS Excel
This workshop provides an introduction to the world of hypothesis testing using MS Excel with a focus on qualitative/categorical data. The outline of the workshop is as follows:
- A basic introduction to statistics and hypothesis testing. It is important before using Excel for data analysis that some basic statistical terminology is understood;
- Compiling descriptive statistics (both numerical and graphical) with a focus on qualitative/categorical data;
- Practical examples of testing for relationships across one and two categorical variables;
- Communicating statistical results in a clear, concise manner.
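The workshop carries these tests out in MS Excel; purely as an illustration of the underlying idea, the R sketch below runs the equivalent chi-square tests on a hypothetical contingency table:

```r
# Hypothetical contingency table: satisfaction by gender
tab <- matrix(c(30, 20,
                25, 35),
              nrow = 2, byrow = TRUE,
              dimnames = list(Gender = c("Female", "Male"),
                              Satisfied = c("Yes", "No")))

# One categorical variable: chi-square goodness-of-fit against equal proportions
chisq.test(rowSums(tab))

# Two categorical variables: chi-square test of independence
result <- chisq.test(tab)
result
result$expected      # expected counts used in the test
```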
NVivo laboratory-based
16. Introduction to NVivo
This workshop is a basic introduction to NVivo and will provide information and practice to get started with a project. Using sample data, the workshop will cover:
- Creating a new project and setting up its structure in NVivo;
- Sources: Reflecting on data;
- Coding;
- Exploring data: Lexical queries;
- Cases and classifications;
- Exploring data: Coding queries and visualisations.
17. Qualitative Data Analysis using NVivo
This workshop builds upon Introduction to NVivo. It will look at other kinds of sources, with a particular focus on survey data (datasets). In addition, the workshop will cover the three types of mapping tools available, as well as more ways of working with queries, including the matrix coding query. The workshop will cover:
- Reviewing ways of writing;
- Exploring patterns in the data;
- Revisiting the coding query;
- Maps.