Session List

Key Story - Harnessing the disruptive nature of portable sequencing for community empowerment

The current 'climate' is full of buzzwords such as AI (artificial intelligence), deep learning, cloud computing, and the 'Internet of Things'. As consumers, and even as research specialists, we can find all of this overwhelming. At ESR we are endeavouring to provide our staff, clients, and hopefully the wider community, with some insight into the technologies behind this jargon. In this talk I will discuss our experiences with the Nvidia Jetson family of small embedded computing platforms. What started as an idea to address a very personal need has developed into an international, collaborative, cross-programme study to develop and deploy an innovative, disruptive, portable and affordable sequencing technology into the hands of the community to empower their health and well-being. Additionally, the affordable, easy-to-source components provide exciting opportunities for endeavours such as community outreach and education. If you want a sneak peek, visit this GitHub repository. Book here
Key story - From Classics to Computer Science and back again

A self-described computational philologist, Thomas is an expert in digital humanities, data and stylometric analysis, and historical language processing. This Key Story will chart his multi-faceted research trajectory: from a 13-year-old facing the difficult choice between learning ancient Greek and training in Computer Science, to a successful academic with positions at Harvard and the University of Leipzig, where he has been able to combine these dual interests in novel and powerful ways. More about Thomas can be found here. Book here
Performance - Music performance and project showcase

In this performance, 'e-waste' video and audio circuits more than 20 years old are removed from the waste stream and resuscitated, mis-wired, and 'incorrectly' interconnected. The results of this process create audio-visual signals with 'sentient' or 'generative' characteristics. The voices and cinema of these discarded circuits are broadcast and redistributed through this YouTube Live feed. Interaction: If you're watching this from a computer you can also access an in-progress interactive user interface where you can influence this system. The ultra ultra pre-beta version of that (in fluctuating states of (dis)repair) is located here. Sign up here
Key story - Exploring the cultural horizons of open science: your research and your life

Open science is a lot more than open access. Sci-hub is not the goal. Open science unfolds new demands for culture change in the academy; it builds a new pathway toward more generous and joyful research practices and outcomes. Science is too consequential to be colonized by the neoliberal marketplace. I will open up a conversation about an alternative open science economy, and about the shared cultural practices—past and future science norms—that can make all of our research collectively better, quicker, more rigorous, and a lot more fun. From the infinite game of science—which you get to be paid to play—to the fierce equality of future global research that puts everyone’s work on a planetary stage: open science has really just begun (and Sci-hub is still kinda cool). Background content is available. Book here
Welcome to ResBaz 2020: Pick n Mix

A brief welcome to the Research Bazaar NZ 2020 - a collaborative effort to bring free, open, online researcher skills sessions to the NZ research community. Feel free to drop-in and ask questions. Zoom link
Research Data Management (Part 1): Planning, Organising & Storing

Research data: that which is created, collected or observed in the course of producing original research, regardless of format. This introductory workshop is aimed at researchers, particularly those embarking on their research career or starting a new research project. Attendees will develop strategies for capturing and organising research data, sharing and reusing data, and have an opportunity to draft a Data Management Plan (DMP). You will be introduced to data management concepts, best practices, services and useful tools to support you in managing and sharing your research data. Book here
Crash course in LaTeX using Overleaf

Learn the basics of LaTeX and Overleaf in this workshop. We will create a template that you can use for both articles and a thesis. Book here
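
To give a taste of what this kind of template looks like, here is a minimal sketch (the class options and packages shown are illustrative choices, not the workshop's exact template):

```latex
% A minimal article template; swap \documentclass{article} for
% \documentclass{report} when structuring a thesis into chapters.
\documentclass[12pt]{article}

\usepackage[utf8]{inputenc}  % input encoding
\usepackage{graphicx}        % including figures
\usepackage{hyperref}        % clickable cross-references and links

\title{My First \LaTeX{} Document}
\author{Your Name}
\date{\today}

\begin{document}
\maketitle

\section{Introduction}
Plain text, inline maths such as $E = mc^2$, and numbered sections
are all written in ordinary text files that \LaTeX{} typesets for you.

\end{document}
```

Pasting this into a new Overleaf project and recompiling produces a complete one-page PDF.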
Research Data Management (Part 2) - Sharing, archiving and publishing

Researchers are increasingly being asked by funders, publishers and their institutions to share their research data. Come along to this workshop to learn about how you can prepare and disseminate your research findings; increase your research impact through data publication; and, learn about services available to you to achieve this. Book here
Introduction to Nectar virtual machines (UoA only)

Is your desktop/laptop struggling to perform research analysis?

Using a virtual machine could help.

The Centre for eResearch is offering a 1-hour online workshop and practical session to enable University of Auckland researchers to use Nectar Research Cloud (virtual machines).
This session is ideal for those considering or starting to use the Nectar Research Cloud; presenters will provide practical instruction on getting an allocation and setting up a Windows virtual machine.
Please register with your University of Auckland email address.
Book here

Tidy data and introduction to OpenRefine

Good data organisation is the foundation of any research project. We often organise data in spreadsheets in ways that we as humans want to work with it, but computers require data be organised in particular ways. In order to use tools that make computation more efficient such as programming languages like R or Python, we need to structure our data the way that computers need it. Since this is where most research projects start, this is where we want to start too!

Preparing data for analysis is an important part of the research workflow. Some of this involves data cleaning, where errors in the data are identified and corrected or formatting is made consistent. OpenRefine is a powerful free and open-source tool for working with messy data: cleaning it and transforming it from one format into another. This lesson will teach you to use OpenRefine to effectively clean and format data and automatically track any changes that you make. Many people comment that this tool saves them months of work they would otherwise spend making these edits by hand. Book here
Getting started with the bash shell

An introduction to using the Linux command line with one of the most popular shells: the bash shell.

We'll cover navigating the filesystem, moving files around, working with file permissions, finding files, bash history, and more if there is time.

This demonstration will be done on the Linux operating system (OS). To be able to follow along, you should have access to a bash shell on your own computer, either

* directly on a Linux operating system,

* using a terminal on a Mac,

* or on a Windows OS, using

  - the Windows Subsystem for Linux (see https://docs.microsoft.com/en-us/windows/wsl/install-win10),

  - or a bootable version of Ubuntu on a USB drive (see here). Book here
Using Qualtrics for creating surveys?

We'll provide an overview of Qualtrics - a web-based survey tool for conducting surveys for research, evaluations and other data collection activities. Join us for this community building session - speakers will share their experiences of using Qualtrics to create large scale surveys and invite the audience to ask questions and share tips. Book here
R for Social Scientists

This session will introduce R to participants with no programming experience. We hope to cover R syntax and the RStudio interface, then move through how to import CSV files, what a data frame is, how to deal with factors, how to add and remove rows and columns, how to calculate summary statistics from a data frame, and a brief introduction to plotting. Book here
Research Compute - overview of University of Auckland options

Need more computer power to do your analysis? Is your laptop/desktop struggling to run your analysis? Come along to hear about virtual machines and High Performance Computing options available to University of Auckland researchers and research postgraduate students. Book here
Data Analysis with Jupyter Notebooks

The Jupyter Notebook is an open-source web application that you can use to create and share documents containing live code, equations, visualisations, and text. This session will give an overview of how and why these notebooks can be useful for research and analysis, and how you can unlock the power of the functionality they offer. Book here
Literature reviews with NVivo

If you’re buried under piles of literature and can’t see the wood for the trees, NVivo might just have something to offer you. While NVivo is primarily designed to analyse qualitative data, much of its functionality is applicable to literature reviews. It won’t analyse the literature or write the review for you, but it has a range of features that will assist you in working with PDFs and notes relating to your review. The ability to run queries on your literature is particularly powerful, and there is also a range of useful visualisation tools. This one-hour webinar will introduce the software and demonstrate some of its key functionality in relation to literature reviews. Experience with NVivo will be helpful, but is not required. Book here
Less is more: creating infographics

Tips and tricks to help you communicate your data more effectively. Book here
Social Network Analysis - an introduction

This session is intended as a very general introduction to social network analysis (SNA), geared towards researchers interested in what network analysis is all about from both theoretical and practical perspectives. Jon will talk about the characteristics of social networks and some well-known positional measures, give a quick “lay of the land”, and suggest where to go if SNA is potentially relevant to your developing research. He’ll also outline some tools and frameworks that you’ll find useful. Book here
Working with social media data?

Social media data can enable insightful research into areas like social behaviour and current events. How do you get started? And what are your experiences with the technical knowledge and costs required for collecting, cleaning and analysing social media datasets? Book here
Hacky Hour - Questions

Zoom link
Linux command line - beyond the basics / Bash shell - some more basic tools

In this second session, you'll explore a few basic but powerful command-line tools that the Linux shell provides:

* regular expressions: grep, wildcards

* sed: stream editing with regular expressions

* awk: handling formatted data - either data files or the output of bash commands.

We'll illustrate using these at the command line and as part of bash scripts. Book here
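
To give a flavour of the three tools, here is a small sketch using a made-up two-column CSV file (the filename and data are invented for illustration):

```shell
# Create a small sample data file
printf 'alice,42\nbob,7\ncarol,99\n' > scores.csv

# grep: print lines matching a regular expression
grep 'a' scores.csv                      # lines containing an 'a'

# sed: stream-edit with a substitution
sed 's/,/: /' scores.csv                 # turn 'alice,42' into 'alice: 42'

# awk: treat each line as delimited fields
awk -F',' '{ total += $2 } END { print total }' scores.csv   # sums column 2
```

Each tool reads text line by line, which is why they chain so naturally into bash pipelines and scripts.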

Digital Storytelling and Data Visualisation with KnightLab

Use open source tools to create interactive narratives and visualisations for your research. This session provides an overview of a range of free, easy-to-use tools from KnightLab useful for time or location-based narratives. Learn the basics with the option to explore more advanced features, and collaborate on creating a TimeLine. Book here
Data publishing tips and Q&A - drop-in session

Pop by for an overview on data publishing - why, what, where and how. Bring along your questions to this drop-in session. Zoom link
Workshop: Introduction to OpenRefine (3 hours)

OpenRefine is a powerful free, open-source tool for working with messy data: cleaning it; transforming it from one format into another; and extending it with web services and external data. This introductory, follow-along workshop will demonstrate how it can help you get an overview of a data set and resolve inconsistencies; split data up into more granular parts; match local data up to other data sets; and enhance a data set with data from other sources. Book here
How can Python help your research?

A holistic overview of the Python programming language aimed at those who are new to Python and interested in how it can help with their research. No prior knowledge of Python required, no hands-on coding involved, just sit back with your favourite beverage and find out about: key concepts in Python, how Python is used around the world, and how Python can help you organise, analyse, and visualise your research data. Book here
Workshop: Topic Modelling with Tidy ToPān (3 hours)

Introductory follow-along workshop on Topic Modelling, using Tidy ToPān. ToPān is Topic Modelling for everyone: from people without programming knowledge to people who want to build teaching and text-reuse tools and apps based on Topic Modelling data, without having to develop their own tool or majorly restructure their textual data. Installation instructions. Book here
Research Portfolio website (2 hours)

Learn how to create an attractive, functional website to showcase your research career and outputs using GitHub and Wowchemy. Some familiarity with git, HTML and CSS will be beneficial, but is not necessary. Book here
Introduction to Topic Modelling

Topic modelling is a frequently used tool for discovering hidden structures in text-based material. Learn how it's used in a range of research contexts, including the digital humanities and social sciences. Book here
Tidy data with spreadsheets (2 hours)

This lesson is designed for those interested in working with data in spreadsheets, using library data for reference.

It will demonstrate good practice for using spreadsheet programs for data wrangling, formatting data tables in spreadsheets, and dealing with dates as data.

This is a very useful grounding for anyone who makes, uses, or contributes towards work/projects that use data, especially in spreadsheets.

You must have access to spreadsheet software, ideally Excel. Book here
Intro to XML and JSON (2 hours)

"What is XML? How does XML work? How can I use XML? What can I use XML for?"

"What is JSON? How does JSON work? How can I use JSON? What can I use JSON for?"

Useful for anyone who wants/needs to engage with data in XML or JSON formats and wants a guided tour of formats, and how to create/read/use them. Book here
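
As a taste, here is a sketch of the same made-up record in both formats, read with Python's standard library (the field names and values are invented for illustration):

```python
import json
import xml.etree.ElementTree as ET

# One record, expressed twice
json_text = '{"title": "ResBaz 2020", "sessions": 40}'
xml_text = '<event><title>ResBaz 2020</title><sessions>40</sessions></event>'

# JSON maps directly onto Python dictionaries and lists
record = json.loads(json_text)
print(record["title"])            # ResBaz 2020

# XML parses into a tree of elements navigated by tag name
root = ET.fromstring(xml_text)
print(root.find("title").text)    # ResBaz 2020
```

JSON tends to suit nested key-value data, while XML's attributes and mixed content suit document-like data; the session digs into when each format fits.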
How to plan for real world impact!

Economic, societal and environmental impact, or the ‘non-academic’ impact of research, is becoming an increasingly important part of the research ecosystem. It is standard practice for researchers to be asked by funders to describe the benefits of their research and how they might enable that benefit to be achieved. This session offers a high-level step-by-step guide on how to incorporate impact into your research planning. Book here
Workflow languages – your foundation for accuracy and reproducibility in data analysis

Are you working with big data? Do you need to pass your data through various software tools? If you’ve ever been in this situation (as I have, in a population genetics master's project), you'll know it can become very difficult to maintain reproducibility and accuracy: wait, have I updated this output file? The more manual steps we carry out, the more human errors are inevitably introduced into our analysis, hampering accuracy and reproducibility.

Be lazy: the machine does it better.

Workflow languages automate your data analysis workflow. But this isn’t all: they ensure all your analysis logs are captured in an organised fashion, and they explicitly record the software (and exact software versions) used and the input and output files at each step. Lastly, when your data inevitably becomes big data, you can easily scale up from running your analysis on your laptop to running it on a high-performance cluster (HPC) such as NeSI.

In this workshop, we will work through an introduction to Snakemake, a workflow language based on the popular programming language Python. This workshop is intended for anyone who has several steps in their data analysis workflow, particularly when many different software tools are involved. Book here
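
To give a feel for the idea, here is a minimal sketch of a Snakefile (the rule names, filenames, and shell command are invented for illustration, not the workshop's material):

```
# Snakemake works backwards from the files you ask for...
rule all:
    input: "results/summary.txt"

# ...to the rules that know how to make them.
rule summarise:
    input: "data/raw.txt"
    output: "results/summary.txt"
    shell: "sort {input} | uniq -c > {output}"
```

Running `snakemake` then rebuilds `results/summary.txt` only if it is missing or older than `data/raw.txt`, capturing the whole pipeline in one declarative file.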
Tour of New Zealand eScience Infrastructure

NeSI’s goal is to raise the capability of NZ-based researchers - but how?

In this session we will give attendees a tour of our most popular offerings. Come along to learn how NeSI’s friendly team can help you overcome common (yet tricky) research challenges including:

- Accessing NeSI’s High Performance Compute platforms for the first time

- Transferring large amounts of research data

- How to apply machine learning methods in your project on the HPC platform

- Getting support as you develop or fine tune code to run effectively on HPCs

- Staying up to date on community events and offerings

If any of these topics sound intriguing, please be sure to join the call. Several NeSI staff will be online to share their expertise and we would be thrilled to hear how NeSI can support you on your research journey. Book here
Social Media for Research Networking and Engagement

What does social media do for research impact and networking that more traditional forms of communication don't?

This session will explore how to make the most of social media in a research context, including information on: getting information, providing information, networking with other academics, engaging with communities outside of academia, and finding your tribe. It will also address how you can manage the distinction between personal and professional on social media, and how to mitigate the dangers of using it. Book here
Bash scripting for researchers (90mins)

Bash is a powerful tool that enables users to quickly automate tasks that would otherwise be labour intensive. We will demonstrate how you can use Bash to write time and effort-saving computer scripts. A little Bash knowledge goes a long way.

What do we mean by computer scripts?
If you have a collection of commands you'd like to run together, you can combine them in a script and run them all at once. You can also pass arguments to the script so that it can operate on different files or other input. A script is a handy way to:
• Save yourself typing on a group of commands you often run together.

• Remember complicated commands, so you don't have to look up, or risk forgetting, their particular syntax each time you use them.

• Use control structures, like loops and case statements, to allow your scripts to do complex jobs. Writing these structures into a script can make them more convenient to type and easier to read.
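
The bullets above can be sketched in a small script (the filename and task here are invented for illustration):

```shell
#!/usr/bin/env bash
# count_lines.sh - report the line count of every file given as an argument
set -euo pipefail    # stop on errors and on undefined variables

for f in "$@"; do                       # loop over the script's arguments
    lines=$(wc -l < "$f" | tr -d ' ')   # tr strips padding some systems add
    printf '%s: %s line(s)\n' "$f" "$lines"
done
```

After `chmod +x count_lines.sh`, running `./count_lines.sh notes.txt data.csv` prints one report line per file; the loop is exactly the kind of control structure described above.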

This session is open to students and researchers from all domains and will build upon the content covered in the introduction to Linux command-line session happening online on 23 Nov. Book here

Research data collection & surveys with REDCap - an overview

A brief overview of REDCap, a software tool which can help you create and manage research databases and participant surveys with sensitive data. Book here
Creating Professional LaTeX Reports Without Losing Hair (2 hours)

Go beyond the basics in this workshop. You will learn how LaTeX documents can be split up into styles, pages, appendices, etc. so that you can work in manageable chunks. This workshop will demonstrate several useful LaTeX packages and show you how to create documents without Overleaf. Prior knowledge in Bash and LaTeX is helpful, but not required. A template will be provided for participants. Book here