Session List

See below for the workshops on offer in 2026.

Managing Research Data


An overview of research data management requirements, practices, tools and support across the research data lifecycle. Research data or artefacts are defined as items created, collected or observed in the course of producing original research, regardless of format. This introductory workshop is aimed at researchers, particularly those embarking on their research career or starting a new research project. Attendees will hear about policy, legal and ethical requirements, and the FAIR, CARE and Māori Data Sovereignty principles, and will develop strategies for data management planning and for capturing, organising, sharing, and reusing research data.
Data Science
Publications, Engagement & Impact
NVivo Showcase


There's a lot more to NVivo than initially meets the eye! In this webinar we'll be showcasing our favourite features of NVivo, including matrix coding queries, explore and comparison diagrams, and mind-maps. This session is perfect for researchers who are new to NVivo, as well as those who are familiar with the basics and curious to know what else is possible. This workshop is recorded.
Qualitative Methods
NVivo For Literature Reviews


Reviewing the literature is an important part of the research process. Organising relevant papers and findings is more than just a data entry or bibliographic task; you also need to be able to analyse and integrate this material with the qualitative data you are gathering. This one-hour demonstration will provide an overview of NVivo’s functionality with regard to literature reviews. Importing and coding literature, running queries on published material, and working with bibliographic data in conjunction with your NVivo project will all be covered. This workshop is recorded.
Qualitative Methods
Introduction To Cleaning & Transforming Data With OpenRefine


OpenRefine is a powerful, free, open-source tool for working with messy data: cleaning it; transforming it from one format into another; and extending it with web services and external data. This introductory, practical workshop will demonstrate how it can help you to: understand the structure of a data set and resolve inconsistencies; split data up into more granular parts; match local data up to other data sets; and enhance a data set with data from other sources. Setup instructions can be found here.
Data Science
Qualitative Methods
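The clean-up tasks listed above can also be sketched programmatically. As a hedged illustration only (OpenRefine itself is point-and-click, and the records below are made up), a few lines of Python show the same ideas of resolving inconsistent values and splitting a compound field:

```python
# Hypothetical messy records of the kind OpenRefine helps clean up
records = ["Auckland, NZ", "auckland , NZ", "AUCKLAND,NZ", "Wellington, NZ"]

def clean(record: str) -> tuple[str, str]:
    """Split a 'city, country' value into parts and normalise case and spacing."""
    city, country = (part.strip() for part in record.split(","))
    return city.title(), country.upper()

# The three inconsistent Auckland spellings collapse into a single value
cleaned = sorted(set(clean(r) for r in records))
print(cleaned)
```

In OpenRefine the equivalent operations are clustering (to collapse inconsistent spellings) and splitting a column into several columns, all driven from the interface rather than code.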
Research Data Collection & Surveys With REDCap


REDCap is used by researchers to create surveys or databases to collect and track information and research data, and schedule study events. It is ideal for sensitive research data, including personally identifiable data and consent. It supports different levels of access for collaborators, including from multiple sites and institutions, and tracking of data entry and revision history. REDCap enables online and offline data collection, data sovereignty obligations, and export of data into common software. It is used across the Aotearoa research community, including Universities, CRIs and Te Whatu Ora. Join us to hear about and see a demonstration of how this tool can help your research.
Qualitative Methods
Capturing Longitudinal Data in REDCap


This talk will cover the basics of how longitudinal or repeated data may be captured using a variety of tools within REDCap. We’ll start with an introduction to the underlying data structures, then cover the use of events and repeating forms, and how to export the data in a way that simplifies the analysis to be undertaken.
Data Science
Qualitative Methods
AI Tools For Literature Reviews


Literature reviews are an important part of setting the scene for your research, but it can be time-consuming to find and evaluate large numbers of sources. With the explosion of interest surrounding all things AI, many researchers are wondering how generative AI tools might be applied to reviewing the literature. In this session, we will examine how generative AI tools can be used at different stages in the literature review process. We will explore the capabilities and limitations of some readily available AI tools and discuss how they can support your searches. Finally, we’ll review some important considerations when choosing to use them in the research process.
Data Science
Artificial Intelligence
Design 101: Presentations, Posters, And Powerpoints For Researchers


Why are you here? What are you presenting? Who are you presenting to? No, this is not the abstract for “Existentialism with Nietzsche”, it’s “Design 101: Presentations, Posters, and PowerPoints for Researchers”! Have you ever seen a research poster or a PowerPoint presentation that was truly terrible and thought, “Wow, I wonder how I could salvage that? I wonder how I can make research approachable through attractive design?” In this session, we will give you tips on what makes good visual design for research. We will walk you through the dos (and some don’ts) and what to consider when putting together a visual research presentation, whether a poster, a PowerPoint slideshow, or another type of medium.
How To Plan Your Research For Real World Impact


Economic, societal and environmental impact, or the 'non-academic' impact of research, is becoming an increasingly important part of the research ecosystem. It is standard practice for researchers to be asked by funders to describe the benefits of their research and how they might enable that benefit to be achieved. This session offers a high-level step-by-step guide on how to incorporate impact into your research planning.
Publications, Engagement & Impact
Researcher Skills And Career Planning For Academia And Beyond


Learn about the different capabilities and skills needed for a successful career in academia, tips for planning your career in academia, and what transferable skills academics possess which are relevant outside of academia.
Publications, Engagement & Impact
How to Peer Review


Peer review is a cornerstone of modern academic publishing practices, but researchers seldom receive any formal training on how to actually do it. This workshop provides an introduction to essential peer review skills, and a template to guide you on your way to producing useful peer reviews. It is aimed at published researchers who have started to receive review requests from journals or colleagues. Attendees will learn about the general peer review process, and how to write fair, constructive, and actionable reviews of others' work. Improving your peer review skills will also improve your own writing skills, and help you to think about your own work from the perspective of a peer reviewer.
Publications, Engagement & Impact
Publish Smarter: Choosing Where to Publish


This session offers practical guidance for researchers on how to publish strategically and purposefully. Learn how to assess journals for quality and fit, avoid predatory publishers and understand key indicators like peer review, indexing and journal metrics. We’ll share tips for publishing in top-tier journals such as Nature and Science, and explore purpose-driven strategies including Open Access, Indigenous publishing and broader outputs like policy briefs and preprints. Whether you're aiming to advance your career, share knowledge with communities or contribute to wider conversations, this session will help you choose the right publishing pathway.
Publications, Engagement & Impact
Introduction to R for Data Analysis


Programming skills are becoming more and more important for researchers to effectively collect, analyse, and translate data into publication-ready insights and figures. But it can be difficult to 'take the plunge' and learn those first steps. This practical, follow-along workshop is aimed at those who are completely new to programming, and introduces the R programming language for data analysis. Participants will learn about the most important concepts for starting with R, including setting up a project in R, basic programming principles, reading in data, summarising and subsetting data, and creating basic plots. We aim to give participants a brief overview of what is possible with R and to inspire them to continue learning. Participants will be expected to follow along and will be provided with setup instructions in advance, which must be completed before the workshop.
Data Science
Exploring REANNZ HPC: Tools and Services for Researchers


The computational demands of high impact research continue to outpace what individual groups or institutions can realistically support. REANNZ’s Mahuika high performance computing (HPC) platform is designed to help bridge this gap for researchers across Aotearoa. In this talk, we’ll explore the core HPC services REANNZ provides, discuss why HPC might be the right fit for your research, and outline the steps your team can take to gain access to these powerful resources.
Research Computing
High Performance Computing
Introduction to High Performance Computing


A hands-on introduction to high performance computing (HPC) on a REANNZ supercomputer. Members of the REANNZ training team will guide attendees through HPC fundamentals including software environment modules, scheduler use, profiling, and scaling. Requirements: REANNZ HPC account; details provided after registration and closer to the event. We recommend you attend 'Introduction to the Command Line' or are already familiar with navigating a command-line Linux environment.
High Performance Computing
Journeys into High Performance Computing


Hear from people who had no prior knowledge or background working in High Performance Computing (HPC) yet today are in roles involving digital approaches to research. Panellists will share stories of what it was like to learn things from scratch, what tips & tricks worked for them, and how they overcame challenges to get to where they are today. Facilitated by the Women in HPC Australasian Chapter (https://tinyurl.com/whpcaunz), this session highlights diverse voices and champions the message that “anyone can be successful in this space”.
Research Computing
High Performance Computing
Digital Storytelling with KnightLab


Interested in learning how to use a suite of open-source tools to create interactive narratives and visualisations for your research? This session provides an overview of a range of free, easy-to-use tools from KnightLab useful for time or location-based narratives. Learn the basics and see how easy and fun it is to create a compelling StoryMap.
Publications, Engagement & Impact
ResBaz Drop-In Clinic (HackyHour)


Open drop-in session to help with troubleshooting, getting help with installing session requirements, and any ResBaz questions. No registration required, just join via Zoom when the session starts.
Research Computing
Data Science
Visual Abstracts Create An Attention Hook To Your Published Article


Visual abstracts are a 'movie poster' of a journal article, displayed on social media, that hooks a viewer's attention and draws them in to read your article. Just as a three-minute thesis is a verbal elevator pitch, a visual abstract is a pictorial summary understood in a 30-second glance. Designed with icons and keywords, they are simpler than a graphical abstract and quicker to make. Visual abstracts are a powerful thinking tool for yourself and a valuable communication tool to engage others. The first half of the session is an interactive exploration of visual abstracts to inspire your imagination. The second half is an introductory-level guided workshop to create a basic visual abstract.
Publications, Engagement & Impact
Research Collaboration And Reproducibility With Google Colab


Finding it challenging to collaborate with other researchers? Do you want to make your research as accessible and reproducible as possible? Google Colab is a hosted Jupyter notebook service that allows anybody to write and execute Python code through the browser, while providing free access to computing resources including GPUs. With a robust free tier, no installation or prerequisites, and a tonne of features, Google Colab can undoubtedly help you. This one-hour introductory workshop will demonstrate the most important features of Google Colab. Some UoA-specific topics will also be covered, such as how to mount your Google Drive or Dropbox so you can utilise your datasets and have your results saved automatically. The workshop's final section will showcase examples of how Google Colab is being used for research and education.
Data Science
Hands-On Statistical Analysis With R


Many researchers approach statistical analyses with trepidation because they’re unsure about which analyses are appropriate for their data. This hands-on workshop introduces some important background statistical concepts, provides a simple workflow for deciding on which analyses are appropriate based on the kinds of variables you’re working with, and then demonstrates how to conduct these analyses in R. We will apply commonly used statistical analyses such as linear regression, independent-sample t-tests, and chi-squared tests, to R’s built-in datasets. We will discuss the output and how we might present the results for publication. This workshop is aimed at attendees who already understand the basics of working with the R programming language, and who want to learn how to perform statistical tests in R. For an introduction to R for absolute beginners, please see ‘Introduction to R for Data Analysis’ instead.
Data Science
Publications, Engagement & Impact
Introduction to Programming With Python


Python is a high-level, general-purpose programming language that is popular for working with research data owing to an active developer base and a wide range of packages that can be leveraged for research. This comprehensive hands-on session will cover the fundamental building blocks of working with Python to analyse and visualise data. Together we'll interactively learn how to use Python to generate a plot from a CSV file, getting to grips with the core functionality of the language along the way.
Data Science
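To give a flavour of the kind of task the session builds toward, here is a minimal sketch using only Python's standard library. The data below is invented, and a real session would read from a file on disk and finish by plotting the result:

```python
import csv
import io
from statistics import mean

# Made-up CSV data standing in for a file on disk
raw = """site,rainfall_mm
A,120
B,85
C,143
"""

# For a real file, replace io.StringIO(raw) with open("rainfall.csv")
with io.StringIO(raw) as handle:
    rows = list(csv.DictReader(handle))

# Pull one column out as numbers and summarise it
rainfall = [float(row["rainfall_mm"]) for row in rows]
print(f"{len(rainfall)} sites, mean rainfall {mean(rainfall):.1f} mm")
```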
Introduction to Version Control With Git


Are you working with code? Do you wish there was a neater way to keep an old copy of your code around, in case you still need it? Do you need to collaborate with your colleagues? This workshop is for you! We will introduce Git, a version control system, for tracking changes on your local machine, and briefly touch on how to use GitHub as a remote repository. Git keeps track of changes to code and frees us from the burden of keeping multiple files with increasingly long and complex filenames. Even though version control systems originated in the world of software development, they're just as useful when working with research projects. Connecting to a remote repository like GitHub also lets you keep a backup of your code and its history, sync across your devices, and use powerful features for collaborating with your colleagues. If you're planning to write any kind of code during your research, it's highly recommended that you understand and use version control systems like Git and remote repositories like GitHub to improve the way you work and collaborate (and to make it more enjoyable). This is a beginner-friendly workshop - participants will benefit from having some basic experience with a command line, but this isn't required.
Research Computing
Data Management Planning


Data Management Plans are a useful way of mapping out the collection, storage, analysis, and publication of research data. They surface important institutional or funder requirements, and ensure that project members are aware of their ethical and legal responsibilities when working with project data. This session will provide an overview of how Data Management Plans are a useful tool for researchers at all stages of their work, and in particular, when revisiting research data over time or onboarding new project members.
Data Science
Qualitative Methods
Health Care Data for Research at the University of Auckland


Please note this session is intended primarily for University of Auckland researchers. Administrative data are increasingly used to undertake research because of their considerable volume and variety, and their ability to be captured automatically, over time, and to be linked. Their use is not without challenges, though. In this session, Katrina Poppe, Vanessa Selak, and Mazyar Zarepour draw on their use of New Zealand health care data for research to outline potential data sources, processes for access, and data management and curation issues. They will briefly describe the recently established UoA Health Data Platform.
Research Computing
Data Science
Publications, Engagement & Impact
Unlocking the Potential of the Stats NZ Integrated Data Infrastructure (IDI) For Research


The Integrated Data Infrastructure (IDI) is a large research database holding de-identified microdata about people and households in New Zealand. The breadth of topics covered and the length of its time series make it a world-leading research database. Researchers use the IDI to study health, education, social services, justice, communities, population, income, housing, and the interactions between them. However, the very things that make the IDI a powerful tool for research can also make it a difficult environment for new researchers to begin working with. This workshop will introduce the IDI, explain how researchers are using it, and provide guidance to help new researchers get started. This workshop is aimed at anyone who is interested in hearing more about the IDI and how they might use it for research. Attendees will gain a better understanding of the sort of research possible with the IDI, how the data is accessed and protected, and useful tips and guidance from an experienced IDI user.
Data Science
Qualitative Methods
Maximising Research Impact with Wikipedia


Wikipedia has become one of the most important free and open sources of knowledge, and it’s impossible to ignore the impact it has had on the internet and society as a whole. Researchers, and in particular doctoral candidates, can gain much from engaging with Wikipedia and its sister projects. This session will introduce Wikipedia and how it relates to the world of research. We will explore how researchers can engage with Wikipedia to increase their impact and boost their research metrics, while improving coverage and representation of topics that interest them. Attendees will learn how to edit existing pages, while following best practices. If you’d like to have a go during the workshop please create an account beforehand (click the ‘Create account’ link at the top right on any Wikipedia page. We recommend choosing an anonymous name, and adding your email so you can recover your account if you lose your password).
Publications, Engagement & Impact
Introduction to Research Data Transfer & Data Sharing


This 1-hour workshop is designed to help researchers move data securely and efficiently. Participants will be introduced to commonly used data transfer tools—including FileSender and Globus—and learn when and how to use each for different research scenarios. The session covers best practice for transferring or sharing large or sensitive datasets, managing access, and avoiding common pitfalls. Ideal for researchers, postgraduate students, and support staff who work with research data of any size.
Research Computing
Cybersecurity
Working With Personally Identifiable Research Data


Working with personally identifiable information is common across many research disciplines and methodologies, but it comes with important considerations for privacy and data security. This session will provide an overview of the legal, ethical and policy requirements and best practices for working with personally identifiable data. We'll define the elements that can make data personally identifiable and how this is evolving with new technologies. The presenters will explore moving data across the spectrum of identifiable to deidentified to confidentialised, in the NZ context, in order to comply with a broad range of requirements and make it easier to work with the data.
Data Science
Qualitative Methods
Cybersecurity
Reproducible Data Workflows with Snakemake


Does your data analysis require several steps across various software? Do you need to run the same analysis repeatedly and reproducibly? These common scenarios in digital research can lead to complex manual processes with tedious file handling and a high chance of human error. Workflow languages solve these issues by automating your data analysis with code. They provide reproducibility by ensuring each workflow runs consistently every time. They allow you to organise your software, inputs, outputs and logging for clear versioning, reporting, and results. They are even self-documenting, providing a clear illustration of how your whole workflow fits together. Finally, they allow you to scale your workflows up for running on HPC such as REANNZ HPC. A well-defined workflow means you can set your full data analysis running and go make a cup of tea knowing you’ll come back to accurate outputs and clear logs. In this workshop, we will work through an introduction to Snakemake, a workflow language with its basis in the popular programming language Python. This workshop is intended for anyone who has several steps in their data analysis workflow, particularly when many different software tools are involved. Basic command line experience as provided in "Introduction to the Command Line" is highly recommended, but no other programming experience is required.
Research Computing
When Science Meets the Headlines: Media Engagement for Research Impact


Have you got a big paper coming out that you think is newsworthy? The Science Media Centre can help you get journalists’ attention. This session will introduce you to Scimex - our online portal for promoting embargoed research to registered journalists - alongside our other tools and resources for media engagement. You will learn some tips and tricks on what to do when a journalist calls, and how to ensure your expertise has impact.
Publications, Engagement & Impact
Beyond the script: Building Modern, Maintainable Python Projects


Moving from writing simple Python scripts to managing projects can feel like a leap into the unknown, but adopting a few professional workflows can make your code significantly more robust and maintainable. This 1-hour talk is designed for those who are comfortable with Python basics and are ready to transition from "scripting" to "development." Participants will explore the modern Python ecosystem, focusing on how to use the uv tool for package management and ruff for code formatting and linting. We’ll also cover the essentials of project architecture, the "why" behind type hinting, and how to ensure your code actually works using pytest. The aim is to introduce modern Python tooling and describe a more professional workflow. By the end of the session, you’ll have a clearer roadmap for learning how to better structure your next Python project.
Research Computing
Data Science
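As a small, hypothetical illustration of the "scripting to development" shift described above: the function below uses type hints, and the accompanying test function follows the naming convention that pytest discovers and runs automatically. The function and its names are invented for this sketch:

```python
def normalise_score(raw: float, max_raw: float) -> float:
    """Scale a raw score into the range [0, 1]."""
    if max_raw <= 0:
        raise ValueError("max_raw must be positive")
    return raw / max_raw

# pytest collects functions named test_* and reports each failing assert
def test_normalise_score() -> None:
    assert normalise_score(5, 10) == 0.5
    assert normalise_score(0, 10) == 0.0
```

Running `pytest` against a file like this is the "does your code actually work" step the talk describes; tools such as ruff would then check the same file for formatting and lint issues.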
Package Your Research: Why And How To Turn Your Data Into An R Package


Developing an R package might seem like a task reserved for software engineers, but a basic data package can be created surprisingly quickly. Packaging up your data is one of the most effective ways for researchers to organise, document, and share their work, and it makes using one dataset across multiple projects much easier. This 1-hour talk is aimed at emerging researchers who want to make their data more accessible and reproducible, and who are comfortable with basic R and committing/pushing to GitHub. Participants will learn why creating packages is so useful, and an overview of how to create a simple data package. The aim is to demystify the process of package development and inspire participants to go on to create their own packages. A collection of step-by-step resources will be provided to help get you started on your package development journey.
Research Computing
Data Science
Geopolitics Meets Research: Why Research Security Matters Now


International collaboration is the lifeblood of modern research, yet the global landscape is changing. As universities become central to geopolitical strategy, the "open doors" of the past are being replaced by a model of managed openness. For researchers, this shift can feel like a maze of new risks and policies. Using practical examples from the New Zealand and Australasian context, we’ll explore how research security enables safe, responsible collaboration while addressing risks such as foreign interference, talent programmes, and IP leakage. We’ll discuss what researchers can do to protect their work, reputations, and institutions, while still participating in international collaborations that drive high-impact work.
Cybersecurity
Investigating Organic and Inorganic Chemistry Mechanisms Computationally using ORCA


Are you interested in learning how to investigate reaction mechanisms and transition states of organic and inorganic reactions using computational chemistry? Come and join us for a hands-on, online tutorial where we will learn how to: run density functional theory (DFT) calculations using ORCA on a high-performance computing (HPC) cluster; obtain the transition state and activation energy of a chemical reaction using ORCA; and use Mahuika OnDemand to view your results directly in Avogadro. See https://geoffreyweal.github.io/ORCA_Mechanism_Procedure/ for more about what the workshop will cover.
Research Computing
Data Science
Responsible AI in Research


This 1-hour online workshop explores how researchers can use AI tools responsibly to improve the research process. Attendees will be introduced to how AI tools work, their capabilities and limitations, and the risks of producing incorrect or biased outputs. We’ll explore some examples of how AI can be applied to the research process, and the types of considerations that are important. We'll finish by signposting relevant University policies and where researchers can go for more information.
Artificial Intelligence
Upgrading Your R Scripts With Software Development Principles


Maturing from a beginner R programmer to an intermediate-level R developer can be a difficult transition to make. What sorts of concepts should you spend time learning about, and what sorts of practices will help you to level-up your programming game? This 1-hour talk is aimed at researchers who are comfortable with basic R syntax but find themselves overwhelmed by what to do next to improve their scripts. We'll introduce RStudio Projects, managing code history with version control, breaking down scripts into functions, using dependency management tools, and publishing your code repositories to promote your work and give back. Adding these skills and practices to your toolbox will save you time, help you to collaborate more effectively on analyses, and ensure your findings remain reproducible years after publication.
Research Computing
Data Science
Precision Prompting: Mastering Generative AI for the Research Lifecycle


Generative AI tools are now ubiquitous, but getting them to produce high-quality, reliable research outputs remains a challenge. If you’ve ever been frustrated by "hallucinations" or generic responses, this session is for you. We move beyond simple "chatting" to explore Prompt Engineering—the science of "programming with words." We will dive into advanced techniques like Chain-of-Thought reasoning and Few-Shot prompting to help you automate literature summaries, debug complex code, and refine manuscript abstracts with scientific precision.
Artificial Intelligence
Using LLMs to programmatically extract and curate research data


Spinning raw data into analysis-ready gold often takes far more time than anticipated. These steps don’t often show up in the methods section but are critical for robust research results. Whether you're working with messy survey responses, archival documents, or image collections, transforming unstructured material into clean, structured variables is painstaking work that manual methods handle poorly and traditional programming approaches struggle to scale. Large language models change this. Used programmatically, they can extract structured features from text, interpret images, and produce consistent, usable datasets with more flexibility than rule-based approaches. In this hands-on two-hour workshop, you'll work with real New Zealand text and image data to extract meaningful features using free, cloud-based tools. You'll leave with reusable code you can adapt to your own research data, whatever your discipline. The goal is to demystify programmatic LLM use and give you a practical foundation you can build on immediately. Prerequisites: Basic Python programming experience will greatly assist participation in this workshop. Setup: As we will be using free Google tools, you will be required to use a Google account to participate.
Artificial Intelligence
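The core pattern the workshop describes, calling a model in code and parsing a structured reply, can be sketched as follows. Note this is a hedged illustration with invented data: the model call is mocked with a canned response, and a real workflow would substitute an actual LLM API client:

```python
import json

def mock_llm(prompt: str) -> str:
    """Stand-in for a real LLM API call; returns a canned JSON reply."""
    return '{"name": "Aroha", "city": "Rotorua", "sentiment": "positive"}'

# Instruct the model to reply with machine-readable JSON, not prose
prompt = (
    "Extract name, city and sentiment from this survey response, "
    "replying with JSON only: 'Aroha from Rotorua loved the event.'"
)
reply = mock_llm(prompt)
record = json.loads(reply)  # structured, analysis-ready fields
print(record["city"])
```

The benefit of working this way is consistency: every record passes through the same prompt, and the reply is validated as JSON before it enters your dataset.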
Introduction to JupyterLite


Do you use Jupyter but wish there was an easier way to share notebooks with interactive outputs—for research or teaching—without setting up a server or asking users to install anything? JupyterLite is a WebAssembly-based version of Jupyter that runs entirely in the browser. This 1-hour workshop will show you how to get it up and running on GitHub Pages, walk through a few examples, and finish with Q&A.
Research Computing
Life Reset – Navigating the Professional Job Market


Many early-career researchers find themselves questioning their readiness for professional roles, especially when exploring opportunities beyond a single, clearly defined pathway. Feelings of being behind, underqualified, or unsure where to start are common. This workshop introduces a practical, step-by-step approach to job searching that shifts the focus from vacancy scrolling to employer research and strategic exploration. Career Consultants based at the University of Auckland will show you how to use LinkedIn to assess role requirements, clarify what “qualified” really means, and prepare for the realities of a professional job search with greater confidence and direction.
Publications, Engagement & Impact
The Five Safes Framework in Action: A Tour of a Secure Research Environment


Balancing privacy, legal, and ethics requirements with the need to access and analyse sensitive data across a project team can be challenging. This 1‑hour session is for researchers working with sensitive data who want a clear approach to data privacy and security. We will introduce the Five Safes framework (Safe Projects, People, Settings, Data, and Outputs) and demonstrate how these principles are practically applied within a Secure Research Environment. By the end of the session, you’ll have a clearer picture of how you can use secure infrastructure and how recognised approaches like the Five Safes can help meet data provider, funder and overseas partner requirements.
Research Computing
Cybersecurity
A Practical Guide to Study Preregistration


Many research literatures, across disciplines, are full of findings that cannot be replicated. A major contributor to this “replication crisis” is the use of questionable research practices that inflate statistical significance, meaning that many published findings are not “real”. Preregistration is one solution to this problem. A preregistration is a time-stamped document outlining a study’s hypotheses, sample size, methodology, and analytical plan before data collection begins. It therefore reduces researcher flexibility and increases the credibility of research findings. It is increasingly required or recommended by journals and funders, and is a valuable practice for all researchers. In this practical session I’ll introduce the options for where to post your preregistration and walk through some of the trickier decisions that must be made, including sample size determination, stopping rules, data exclusions, violations of assumptions, and exploratory analyses.
Publications, Engagement & Impact