Session List

Please see below for the online workshops offered in 2024.

Managing Research Data


An overview of research data management requirements, practices, tools and support across the research data lifecycle. Research data or artefacts are defined as items created, collected or observed in the course of producing original research, regardless of format. This introductory workshop is aimed at researchers, particularly those embarking on their research career or starting a new research project. Attendees will hear about policy, legal and ethical requirements, the FAIR, CARE and Māori Data Sovereignty principles, and develop strategies for data management planning, capturing, organising, sharing, and reusing research data.
Publications, Engagement & Impact
Jupyter notebooks for reproducible research


Jupyter Notebooks are a powerful interactive tool that can help you develop and practice your coding skills (especially in Python and Markdown), build reproducible and shareable outputs and easily present the results of your work. This session will discuss some of the pros and cons of using Jupyter Notebooks and give you a chance to follow along and make your first steps in using the tool yourself.
Data Science
NVivo for Literature Reviews


Reviewing the literature is an important part of the research process. Organising relevant papers and findings is more than just a data entry or bibliographic task: you also need to be able to analyse and integrate this material with the qualitative data you are gathering. This one-hour demonstration will provide an overview of NVivo’s functionality with regard to literature reviews. Importing and coding literature, running queries on published material, and working with bibliographic data in conjunction with your NVivo project will all be covered. This workshop is recorded.
Data Collection & Cleaning
Introduction to Cleaning & Transforming Data with OpenRefine


OpenRefine is a powerful, free, open-source tool for working with messy data: cleaning it; transforming it from one format into another; and extending it with web services and external data. This introductory, practical workshop will demonstrate how it can help you to: Understand the structure of a data set and resolve inconsistencies; Split data up into more granular parts; Match local data up to other data sets; Enhance a data set with data from other sources.
Data Science
Data Collection & Cleaning
Research Data Collection & Surveys with REDCap: An Overview


REDCap is used by researchers to create surveys or databases to collect and track information and research data, and schedule study events. It is ideal for sensitive research data, including personally identifiable data and consent. It supports different levels of access for collaborators, including from multiple sites and institutions, and tracking of data entry and revision history. REDCap enables online and offline data collection, data sovereignty obligations, and export of data into common software. It is used across the Aotearoa research community, including Universities, CRIs and Te Whatu Ora. Join us to hear about and see a demonstration of how this tool can help your research.
Data Collection & Cleaning
How to Create a LaTeX Report Without Losing Hair


Go beyond the basics in this workshop. You will learn how LaTeX documents can be split up into styles, pages, appendices, etc. so that you can work in manageable chunks. This workshop will demonstrate several useful LaTeX packages and show you how to create documents with Overleaf. Prior knowledge of Bash and LaTeX is helpful, but not required. A template will be provided for participants.
Publications, Engagement & Impact
Open Access: How to Make Your Publications Open


Open Access to publications and other research outputs ensures you get maximum exposure and recognition for your work. Open publications are viewed, downloaded, and cited at higher rates than closed publications. There are free ways to make your work open regardless of where you publish, so you don't have to publish in OA journals or pay steep publication fees to enjoy the benefits of OA. In this one-hour workshop you'll learn how to make your publications open for free while respecting copyright and publisher agreements.
Publications, Engagement & Impact
Strategic Publishing: Deciding Where to Publish & Understanding the Process


Confused about where you should publish your research? Want to make sure you’re publishing in credible journals? Want to learn more about the publishing process? In this session we’ll cover publishing strategies to maximise the impact of your research and provide an overview of the publishing and peer review process so you know what to expect.
Publications, Engagement & Impact
Managing References With Zotero


Researchers spend a lot of time capturing, organising, and consulting sources, so it makes sense to use a good reference manager. Zotero is a free open-source reference manager built by researchers for researchers. It is simple to learn, yet powerful and feature-rich, and will save you countless hours when wrangling your sources. In this two-hour workshop you'll learn how to use Zotero to capture, organise, and cite your references when you need them.
Publications, Engagement & Impact
An Introduction to Processing Remote Sensing Data With Google Earth Engine


Google Earth Engine (GEE) can be considered a one-stop shop for your raster-based geospatial needs without the hassle of pre-processing. This practical workshop provides a comprehensive introduction to using GEE through the JavaScript-based code editor and aims to give you the knowledge needed to leverage Earth Engine for your own geospatial research. The workshop is delivered in three parts, focused on accessing satellite imagery, performing analysis, and image classification. It is aimed at (but not limited to) novices using raster data in research, or anyone interested in learning a new tool for analysing geospatial data. Experience with JavaScript is beneficial but not necessary.
Data Science
Design 101: Presentations, Posters, and PowerPoints for Researchers


Why are you here? What are you presenting? Who are you presenting to? No, this is not the abstract for “Existentialism with Nietzsche”, it’s “Design 101: Presentations, Posters, and PowerPoints for Researchers”! Have you ever seen a research poster or a PowerPoint presentation that was truly terrible and thought, “Wow, I wonder how I could salvage that? I wonder how I can make research approachable through attractive design?” In this session, we will give you tips on what makes good visual design for research. We will walk you through the do’s (and some don’ts) and what to consider when putting together a visual research presentation, whether a poster, a PowerPoint slideshow, or another type of medium.
Publications, Engagement & Impact
Introduction to R and RStudio


R is a free and widely used programming language for data analysis and statistics. This workshop aims to introduce you to the R programming language, and RStudio - free software used to work with R. We will cover the most important parts of starting with R including setting up your project in R, basic programming principles, reading in data, summarising and subsetting data, and creating simple but beautiful plots. This workshop is aimed at those who are new to R and programming, and covers working with data and basic plotting. For statistical analyses with R, please see Hands-on Statistical Analysis with R.
Data Science
Introduction to the Command Line


The Unix shell is a powerful tool that allows users to perform complex tasks, such as making a series of changes to a large number of images or text files, often with just a few keystrokes or lines of code. It helps users automate repetitive tasks and easily combine smaller tasks into larger, more powerful workflows. Use of the command line is often required to interact with High Performance Computing services such as those offered by NeSI, and it is used extensively in the data manipulation workflows of some disciplines. This workshop will introduce you to this powerful tool so that you can apply it to your research work. The target audience is learners who have little to no prior computational experience, and the instructors prioritise creating a friendly environment to build confidence in research computing. Even those with some experience will benefit, as the goal is to create automated and reproducible workflows. For instance, after attending this workshop you will be able to navigate the filesystem, manipulate files, change the behaviour of commands with options and arguments, and write scripts to perform actions on many files at once.
Research Computing
Data Science
What is the Julia Programming Language and is it Right for Me?


Julia is a relatively new but exciting, multi-purpose programming language, with increasing adoption among scientific researchers. Interaction with Julia closely resembles that of scripting languages, such as R, MATLAB and Python, and a growing number of Julia libraries provide similar functionality for scientific computation. However, extending, modifying, or creating new software in these older languages is complicated, as all performance-critical code must be written in a second, low-level language, like C or Fortran, which is more technically demanding and slower to test, debug, etc. Julia's careful and elegant design solves this two-language problem. In this presentation and Q&A session, find out if Julia is a good match for your research project.
Research Computing
Data Science
Getting Started With the Julia Programming Language


A two-hour introductory workshop for newcomers to Julia, targeted at users from some technical domain, such as science, economics or engineering. The primary focus will be helping users interact with Julia through its powerful command-line interface (REPL) as well as through Pluto notebooks, although other options will be discussed. Further topics will be selected from the following, according to time and participant interest: Carrying out basic mathematical and statistical operations; Creating custom workflows using functions and basic iteration; Performing basic data manipulation and visualization; Suggestions for self-study.
Research Computing
Data Science
NVivo Showcase


There's a lot more to NVivo than initially meets the eye! In this webinar we'll be showcasing our favourite features of NVivo including matrix coding queries, explore and comparison diagrams, and mind-maps. This session is perfect for researchers who are new to NVivo, as well as those who are familiar with the basics and curious to know what else is possible. This workshop is recorded.
Data Science
Data Collection & Cleaning
Research Computing with the Rust Programming Language


What is Rust, and how might you use it for research? Tim McNamara, author of Rust in Action, will give a quick primer on the Rust programming language, and explain how it might be used to speed up, and scale up, your research. No prior experience with the language is necessary. Please note this session is recorded.
Data Science
Python for Image Manipulation and Repeatable Research Pipelines


You know how to crop an image, but what if you need to crop 65000 images in one go? This applied tutorial will introduce how the Python programming language can be used to create powerful, scalable and repeatable workflows, using image manipulation as an example. The session will include a live demo with commentary, project showcase and questions and answers. Having an entry level understanding of Python or a similar programming language will be helpful, but not essential.
Research Computing
Data Science
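As a taste of the batch-processing pattern this workshop covers, here is a minimal sketch in Python. It assumes the Pillow imaging library; the function name, directory layout and crop box are invented for illustration, not the workshop's own material:

```python
from pathlib import Path

from PIL import Image  # Pillow -- an assumption; the session may use other tools


def crop_all(src_dir, dest_dir, box):
    """Crop every PNG in src_dir to box=(left, upper, right, lower), saving into dest_dir."""
    dest = Path(dest_dir)
    dest.mkdir(parents=True, exist_ok=True)
    done = 0
    for path in sorted(Path(src_dir).glob("*.png")):
        with Image.open(path) as im:
            im.crop(box).save(dest / path.name)  # same filename, new folder
        done += 1
    return done
```

The same loop handles 2 images or 65000; only the contents of `src_dir` change.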
Introduction to High Performance Computing with NeSI


A hands-on introduction to high performance computing (HPC) on a NeSI supercomputer. Members of the NeSI training team will guide attendees through HPC fundamentals, including software environment modules, scheduler use, profiling and scaling. We recommend attending 'Introduction to the Command Line' first, or being already familiar with navigating a Linux command-line environment. Requirements: a NeSI account; details provided after registration and closer to the event.
Research Computing
Keeping Your Spreadsheets Tidy


Good data organisation is the foundation of any research project. We often organise data in spreadsheets in ways that we as humans want to work with it, but computers require data to be organised in particular ways. This workshop introduces 'tidy data' principles - a set of recommendations for keeping your projects and spreadsheets clean and organised. This is especially important if you're planning to analyse your data with a programming language like R or Python, otherwise you might need to spend hours and hours tidying up or reformatting your data before you can start your analyses.
Data Science
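As a small illustration of the tidy data idea, here is a sketch using pandas (an assumption for this example; the principles apply equally in R or any other tool), reshaping a human-friendly wide table into one observation per row:

```python
import pandas as pd

# A "messy" wide table: one column per year, as humans often lay out spreadsheets.
messy = pd.DataFrame({
    "site": ["A", "B"],
    "2023": [10, 20],
    "2024": [15, 25],
})

# The tidy version: each row is one observation (site, year, count),
# which is the shape analysis tools expect.
tidy = messy.melt(id_vars="site", var_name="year", value_name="count")
```

Analyses like "count by year" become one-liners on the tidy table, with no manual reformatting.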
Is your computer struggling to analyse your research data?


Does your computer take hours to run an analysis or produce a visualisation? Would it help to process data or run analyses elsewhere and keep your computer for day-to-day work? Do you need to collaborate to process or analyse research data? Join us to hear about different research computing options. We'll discuss what virtual machines and high performance computers are, how they can be accessed, and how they are useful to researchers.
Research Computing
Introduction to Qualtrics for Research Surveys


This session will introduce you to the Qualtrics survey tool, with a particular focus on how it can be used to help your research. Qualtrics is an easy to use, yet powerful tool that allows you to create and distribute fully customisable surveys for a broad range of purposes. The session will cover setting up a survey, options for distribution, and options for analysing your data.
Data Collection & Cleaning
How to Plan Your Research for Real World Impact


Economic, societal and environmental impact, or the 'non-academic' impact of research, is becoming an increasingly important part of the research ecosystem. It is standard practice for researchers to be asked by funders to describe the benefits of their research and how they might enable that benefit to be achieved. This session offers a high-level step-by-step guide on how to incorporate impact into your research planning.
Publications, Engagement & Impact
Digital Storytelling with KnightLab


Interested in learning how to use a suite of open-source tools to create interactive narratives and visualisations for your research? This session provides an overview of a range of free, easy-to-use tools from KnightLab useful for time or location-based narratives. Learn the basics and see how easy and fun it is to create a compelling StoryMap.
Publications, Engagement & Impact
ResBaz Drop-In Clinic (HackyHour)


Open drop-in session to help with troubleshooting, getting help with installing session requirements, and any ResBaz questions. No registration required, just join via Zoom when the session starts.
Research Computing
Data Science
Visual Abstracts Create an Attention Hook to Your Published Article


Visual abstracts are a 'movie poster' of a journal article displayed on social media that hooks a viewer's attention to read your article. Like a 3-minute thesis is a verbal elevator pitch, a visual abstract is a pictorial summary understood in a 30-second glance. Designed with icons and keywords, they are simpler than a graphical abstract and quicker to make. Visual abstracts are a powerful thinking tool for yourself and a valuable communication tool to engage others. The first half of the session is an interactive exploration of visual abstracts to inspire your imagination. The second half is a guided workshop where together we build your creative confidence by making a visual abstract.
Publications, Engagement & Impact
What is NeSI? New Zealand's National High Performance Computers


The computational requirements of high-impact research seem to grow every year, often beyond what individual groups and institutions can reasonably provide. New Zealand eScience Infrastructure (NeSI) seeks to help meet these requirements for the New Zealand research community. In this talk we will go over the core services relating to high performance computing (HPC) that NeSI provides, the reasons why HPC might be suitable for your work, and how your research team can gain access to these resources.
Research Computing
Data Science
Research Collaboration and Reproducibility with Google Colab


Finding it challenging to collaborate with other researchers? Do you want to make your research as accessible and reproducible as possible? Google Colab is a hosted Jupyter notebook service that allows anybody to write and execute Python code through the browser, while providing access free of charge to computing resources including GPUs. With a robust free tier, no installation or prerequisites, and a tonne of features, Google Colab can undoubtedly help you. This one-hour introductory workshop will demonstrate the most important features of Google Colab. Some UoA-specific topics will also be covered, such as how to mount your Google Drive or Dropbox so you can utilise your datasets and have your results saved automatically. This workshop's final section will showcase examples of how Google Colab is being used for research and education.
Research Computing
Data Science
Doing Even More with OpenRefine


You know how to use OpenRefine to clean up messy data using facets and clustering, and you’re curious about some of its more powerful features. This demonstration of transformations will show how to use snippets of reusable code to do things like select part of a sentence, swap author first and last names around, and use Python and regex within OpenRefine. We will also look at using OpenRefine to query web-based APIs to enrich a dataset.
Data Science
Data Collection & Cleaning
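The author-name swap mentioned above can be sketched as a plain Python regex transform, similar in spirit to the Python/regex transformations OpenRefine supports (the function name and pattern here are illustrative, not OpenRefine's own API):

```python
import re


def swap_name(name):
    """Rewrite 'Last, First' as 'First Last' -- a typical cell transformation."""
    # Group 1: everything before the comma; group 2: everything after it.
    return re.sub(r"^\s*([^,]+),\s*(.+?)\s*$", r"\2 \1", name)
```

Applied across a column, this fixes every cell in one pass instead of editing rows by hand.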
Tikanga, Māori Research Ethics and Māori Data Sovereignty


All researchers are welcome to listen and contribute to this kōrero (discussion) about Tikanga, Māori Research Ethics and Māori Data Sovereignty within the context of undertaking research in Aotearoa. Our presenters will provide an introduction to the concepts and issues and why they are important to researchers. Attendees will be invited to respectfully ask questions and share their successful approaches to weaving Tikanga and Māori Data Sovereignty principles into how they conduct research.
Data Collection & Cleaning
Authoring Collaborative Research Projects in Quarto


Learn how to develop reproducible and sharable workflows using Quarto and Git. We will take you through how to host collaborative research projects as a formatted (and cool-looking) HTML page on GitHub. With some easy-to-learn version control and Markdown syntax, research outputs can be shared as a live link that is consistent with your latest analyses. One benefit of Quarto is its flexibility: it accepts multiple programming languages (e.g. R, Python, Julia...) and output formats (e.g. docx, pdf, html...).
Publications, Engagement & Impact
Hands-on Statistical Analysis with R


A hands-on workshop where we will explore some of the built-in demo data sets available in R. We will apply commonly used statistical analyses such as linear regression, independent-sample t-tests and chi-squared tests, discussing the output and how we might present the results for publication. This workshop is aimed at those who are wanting to learn how to do statistical tests in R. If you’d like to follow along, you need some basic R knowledge already. For an introduction to R for absolute beginners please see Introduction to R & RStudio.
Data Science
Introduction to the Python Programming Language


Python is a high-level general purpose programming language that is popular for working with research data owing to an active developer base and wide range of packages that can be leveraged for research. This comprehensive hands-on session will cover the fundamental building blocks of working with Python to analyse and visualise data. Together we'll interactively learn how to use Python to generate a plot from a csv file, getting to grips with the core functionality of the language along the way.
Data Science
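The csv-to-plot exercise described above looks roughly like this sketch, assuming the matplotlib plotting library; the tiny CSV is invented for illustration, and the Agg backend is used so it runs without a display:

```python
import csv
import tempfile
from pathlib import Path

import matplotlib
matplotlib.use("Agg")  # render to file without needing a screen
import matplotlib.pyplot as plt

# A made-up data file -- a stand-in for whatever CSV you bring along.
workdir = Path(tempfile.mkdtemp())
csv_path = workdir / "counts.csv"
csv_path.write_text("year,count\n2022,10\n2023,14\n2024,21\n")

# Read the file with the standard-library csv module...
with csv_path.open() as f:
    rows = list(csv.DictReader(f))
years = [int(r["year"]) for r in rows]
counts = [int(r["count"]) for r in rows]

# ...then plot and save the figure.
plt.plot(years, counts, marker="o")
plt.xlabel("year")
plt.ylabel("count")
plt.savefig(workdir / "counts.png")
```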
Latex 101: An Introduction to Formatting Documents With Code


This session will introduce attendees to LaTeX, a popular typesetting system and programming language used to create professional-looking documents. We will cover the basics of LaTeX, explore how it differs from Microsoft Word, and equip learners with a solid foundation to build upon in later LaTeX sessions. Although there is an initial learning curve, time invested in learning LaTeX will pay off in the long term by giving you a reliable way to professionally format your documents.
Publications, Engagement & Impact
Collaborating with Dropbox: Tips and Tricks


Led by the experts from Dropbox, this seminar is designed to help you make use of key features of this collaborative platform. Hosts will provide practical insights into how to effectively utilise Dropbox to improve collaboration, including how to manage files, share documents, and create teams, especially for those with institutional accounts (e.g. University of Auckland, University of Otago). Attendees will also learn about the latest features, including Transfer, Paper, Backup and how to use them to streamline workflows.
Publications, Engagement & Impact
Researcher Skills And Career Planning For Academia And Beyond


Learn about the different capabilities and skills needed for a successful career in academia, tips for planning your academic career, and the transferable skills academics possess that are relevant outside academia.
Publications, Engagement & Impact
Getting data from the web: An introduction to webscraping and APIs


There are all kinds of useful sources of research data on the web, such as tables on webpages, or records available only via APIs (Application Programming Interfaces). Extracting or assembling this data into a usable form by hand is often prohibitively time consuming or difficult. However, knowing how to scrape data or use an API opens up all sorts of interesting data sources for you to include in your research. In this practical, follow-along workshop, we will start by introducing some important considerations when working with data from the web (ethics, terms of use, copyright). We’ll then explore how data can be scraped from webpages, retrieved from APIs, and processed into a tabular form ready for analysis. Participants are expected to have a novice-level understanding of Python.
Data Science
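One step in the pipeline above, turning an API's JSON response into tabular rows, can be sketched with only the standard library. The payload and field names here are invented for illustration; real APIs each have their own structure:

```python
import json

# A hypothetical API response -- real APIs differ, but the flattening step is the same.
payload = """
{"results": [
  {"name": "Station A", "temp_c": 14.2},
  {"name": "Station B", "temp_c": 11.8}
]}
"""


def to_rows(raw):
    """Flatten a JSON payload into (name, temp) tuples, ready for a CSV or DataFrame."""
    data = json.loads(raw)
    return [(rec["name"], rec["temp_c"]) for rec in data["results"]]
```

In practice you would fetch `raw` from the API over HTTP (respecting its terms of use) before flattening it.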
How to present data & results on a self-updating website, using GitHub Actions


This workshop describes a workflow for creating a self-updating website, hosted on GitHub Pages and powered by GitHub Actions, both of which are free services (with some limitations). These kinds of websites allow you to build up a dataset over time and display that data as a dynamic ‘dashboard’ which automatically updates with new data. Participants will benefit from a basic understanding of web scraping, either by attending "Getting data from the web", or through prior knowledge. This 2 hour taster session will introduce and link important concepts, offer a chance to ask questions, and provide participants with useful resources, but it will not be a hands-on session where participants can follow along.
Data Science
Hands-on introduction to leveraging containers in research code


Have you ever found yourself with a potpourri of Python versions lingering on your laptop, conflicting with one another or drowning you in dependency hell? Have you ever tried to use a custom software environment from a colleague or the internet, only to find it more difficult to install than it is to run? Or tried to share your code with lab mates, who just cannot get it to work without you? If you ever run into reproducibility issues in those scenarios, or even with your own software some years later, this session is for you. This hands-on workshop will introduce you to containers, using Docker, to help you create, maintain, share and consume code for your research in a manner that promotes reproducibility and provenance, and hopefully alleviates some of those pain points with research software. A minimum knowledge of the command line, i.e. launching commands, listing and editing files or changing directories, is required.
Research Computing
Introduction to version control with Git


Are you working with code? Do you wish there was a neater way to keep an old copy of your code around, in case you still need it? Do you need to collaborate with your colleagues? This workshop is for you! We will introduce Git, a version control system, for tracking changes on your local machine. We will also briefly touch on how to use GitHub as a remote repository. Git keeps track of changes to code and frees us from the burden of keeping multiple files with increasingly long and complex filenames. Even though version control systems originated in the world of software development, they're just as useful when working with research projects. You can also connect to a remote repository like GitHub, which lets you keep a backup of your code and its history, sync across your devices, and use powerful features for collaborating with your colleagues. If you're planning to write any kind of code during your research, it's highly recommended you understand and use version control systems like Git and remote repositories like GitHub to improve the way you work and collaborate (and to make it more enjoyable). This is a beginner-friendly workshop; participants will benefit from having some basic experience with a command line, but this isn't required.
Research Computing
Joining the dots for modern data science workflows


Modern research often makes use of programming and other data science tools, but it can be confusing trying to assemble all the pieces and understand how different tools are used together. This workshop uses the example of Python code to plot geospatial data with Geopandas and, along the way, introduces a variety of other tools that are commonly used to help with programming and digital workflows. We'll explain how and why to use a specialised code editor (Visual Studio Code), how to leverage version control with Git and use remote repositories on GitHub, how to use GitHub Actions, and how to incorporate generative AI tools like GitHub Copilot to facilitate writing and augmenting your code. This workshop aims to provide a map of these different tools to show how they can be used together to support reproducibility and openness in the research process. Some prior experience with programming will be beneficial, but this isn't required to attend, as this is not a hands-on code-along session.
Data Science
Navigating New Zealand’s Trusted-Research Protective Security Requirements (TR-PSR)


The session will break down the national TR-PSR policy framework into its key aspects and explain how it aims to prevent foreign interference and espionage within universities. The session is aimed at academic researchers and professional support staff and is intended to provide them with an understanding of how TR-PSR is being operationalised in a university setting. Attendees will be provided with specific references to available and upcoming resources (e.g., Universities New Zealand training modules).
Cybersecurity in Research
Nectar Research Cloud services at the University of Auckland


Do you need more compute power for your analysis? Or somewhere to test your new workflow before processing your research data? Researchers and doctoral candidates from The University of Auckland can access our cloud computing platform free of charge. The Nectar research cloud offers several cloud computing services, from self-service customisable virtual machines to 'off-the-shelf' virtual desktops accessible through a web browser. This session will provide a comprehensive summary of the services available on Nectar and how to request them.
Research Computing
An introduction to cloud security for researchers


Are you following security best practices? This is a tricky question, and the answer can vary widely depending on the technology you are using for your research. Additionally, security can be exceedingly difficult to implement effectively while maintaining a balance between the level of security needed and the effort required to implement and manage it. This talk will present security basics accompanied by some technical solutions researchers can implement to secure their cloud computing tools. We will introduce security controls and provide technical ways to implement these controls. For example, the use of network segmentation to secure systems, which can be accomplished in Nectar using private networks and security groups. Our primary cloud platform for demonstrations will be the Nectar Research Cloud; however, the techniques discussed will be transferable to other cloud platforms, such as AWS.
Cybersecurity in Research
Using digital tools for transcription


Transcribing audio into text is part of the research process for many researchers. Manual transcription can be time consuming, so an increasing number of researchers are using software to transcribe, for example, interview or focus group audio recordings. Join us to hear about various transcription tools, including a demonstration of OpenAI's Whisper, an automatic speech recognition system trained on a large multilingual dataset that can be run on a local computer or a restricted-access virtual machine.
Data Collection & Cleaning
Data Management Planning


Data Management Plans are a useful way of mapping out the collection, storage, analysis, and publication of research data. They surface important institutional or funder requirements, and ensure that project members are aware of their ethical and legal responsibilities when working with project data. This session will provide an overview of how Data Management Plans are a useful tool for researchers at all stages of their work, and in particular, when revisiting research data over time or onboarding new project members.
Data Collection & Cleaning