R Exchange aims to showcase how organisations across a variety of disciplines are using R, Python and related tools to create insights that inform decision-making and support strategic goals. Learn with us how to better visualise outputs, automate reporting, and scale up workflows and data products. Open-source tools are commonly used alongside proprietary products to tackle more complex problems, giving analysts and data scientists the flexibility they need for their work. If you haven’t worked with open-source tools yet, get inspired by how others use them and what can be done.

Whether you're taking your first steps or are already an expert, our diverse programme caters for all skill levels. Meet fellow open-source enthusiasts to connect and learn about best practices and what's new in the open-source data science and analytics world.

This year brings exciting new speakers, dedicated breakout room sessions (NEW), insights from the team at Posit and, as always, good coffee combined with plenty of opportunity for networking and discussion.

If you have any questions, please don’t hesitate to contact us at events@epi-interactive.com, and keep up to date on the latest announcements via LinkedIn and our newsletter.

Where

The Auditorium,
Aitken Street entrance,
National Library of New Zealand,
Wellington

When

Thursday 15 May 2025

Speakers

Garrick Aden-Buie
Posit
Software Engineer
Benjamin Gin
Fonterra Co-operative Group
Digital Platform Lead
Siqi Liang
Fonterra Co-operative Group
Digital Platform Engineer
Petra Muellner
Epi-interactive
Director (Science & Data)
Nick Snellgrove
Epi-interactive
Tech Lead
Olivia Angelin-Bonnet
Plant & Food Research
Statistical Scientist

Schedule

8:15am

Doors open

Come early for a chat with your peers over a nice cup of coffee or tea

9:00am

Welcome / Open-source data science in a disrupted world

9:30am

Shiny + AI: Building LLM-Powered Apps in R

Garrick Aden-Buie

Posit

AI tools like Copilot, ChatGPT, DeepSeek, and Claude have taken the world by storm. They’re powered by Large Language Models (LLMs), models built on the simple premise of generating plausible-sounding text, yet surprisingly versatile. LLMs can be used for all kinds of open-ended tasks, inspiring new approaches and promising productivity gains.

To harness the power of LLMs, you can access them through web APIs and integrate them into your own code. However, working directly with these APIs can be challenging. Fortunately, R packages like {ellmer} and {shinychat} are simplifying LLM integration in R. They handle the complexities of tasks like building chat interfaces and structuring outputs. And with Shiny for R, you can quickly create custom chatbots and seamlessly incorporate them into your web application.

In this talk, I’ll show you how easy it is to create LLM-powered apps with R and Shiny, even if you’re new to it. I’ll give you ideas for your next AI project, explain how to write good prompts to get better results, and show you why R is a great choice for code-first data scientists in 2025.
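
For a flavour of what this looks like in practice, here is a minimal sketch along the lines of the {shinychat} documentation. It assumes an OpenAI API key is available in the environment, and exact function names may differ between package versions.

    library(shiny)
    library(bslib)
    library(ellmer)
    library(shinychat)

    ui <- page_fillable(
      chat_ui("chat")  # renders the chat interface
    )

    server <- function(input, output, session) {
      # One chat session per user; assumes OPENAI_API_KEY is set in the environment
      chat <- chat_openai(system_prompt = "You are a helpful R assistant.")

      # When the user submits a message, stream the model's reply into the UI
      observeEvent(input$chat_user_input, {
        stream <- chat$stream_async(input$chat_user_input)
        chat_append("chat", stream)
      })
    }

    shinyApp(ui, server)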


9:55am

From siloed use of R to a Community with IT support

Benjamin Gin & Siqi Liang

Fonterra Co-operative Group

Fonterra is a large company, and it is inevitable that tools get used in isolation. Around 2012, a small group of individuals progressively built up internal R packages, then saw the potential to share these with the rest of Fonterra via a CRAN-like repository we like to call "FRAN". Getting R to work behind a corporate proxy was also difficult; the same group figured out the configurations and then worked with IT to package R for deployment through the company Software Centre.
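
For illustration only (the repository URL and proxy host below are hypothetical, not Fonterra's actual configuration), this kind of setup typically comes down to a couple of R startup files:

    # .Renviron: route package downloads through the corporate proxy
    # http_proxy=http://proxy.corp.example:8080
    # https_proxy=http://proxy.corp.example:8080

    # .Rprofile: check the internal CRAN-like repository first,
    # falling back to public CRAN
    options(repos = c(
      FRAN = "https://fran.corp.example/cran",
      CRAN = "https://cran.r-project.org"
    ))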


That same group then started building Shiny apps and showcased them to an upcoming project team, which used these individuals as embedded resources to help build and deploy apps to a Shiny Server Pro server. Out of the business value created, the Digital Platform Team was formed under IT in 2023 to support this work: a journey that began back in 2012.


With Fonterra IT's direction to use Microsoft Azure Web Apps (as opposed to a Posit Connect server), existing Shiny apps now need to be migrated, and any new apps are built as containers to run as Azure Web Apps: apps are developed in a local environment, then Dockerised into container images and deployed. Care needs to be taken to write Shiny apps so that each user's session is isolated while the app is accessed concurrently by multiple users; shinycannon and {shinyloadtest} are used to test concurrent usage.
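
As a rough sketch of how that load testing works (the URL and paths below are placeholders), {shinyloadtest} records a representative user session, shinycannon replays it with many simulated users, and an HTML report summarises latency under load:

    library(shinyloadtest)

    # 1. Record a typical user session against the deployed app
    record_session("https://example-app.azurewebsites.net")  # placeholder URL

    # 2. In a terminal, replay the recording with concurrent workers:
    #    shinycannon recording.log https://example-app.azurewebsites.net \
    #      --workers 10 --output-dir run1

    # 3. Analyse the replayed sessions and render a report
    df <- load_runs(run1 = "run1")
    shinyloadtest_report(df, "load_test_report.html")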

10:20am

Session Q&A

10:30am

Morning tea break and networking

11:15am

Breakout rooms

Shiny in production
Taking your Shiny app from an analytical idea or research output to a fully operational, well-designed piece of software accessed by many users can be highly frustrating and is often where things fall apart, despite skill and enthusiasm. In this interactive breakout session we will investigate the pain points and practical solutions involved in scaling a Shiny app into production. We will touch on topics like data access, code modularisation, user experience, and performance profiling and tuning.
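
To give a concrete flavour of one of these topics, here is a minimal, illustrative Shiny module, the standard mechanism for splitting a growing app into reusable, namespaced pieces:

    library(shiny)

    # UI half of a reusable component; NS() namespaces its input/output ids
    histogramUI <- function(id) {
      ns <- NS(id)
      tagList(
        sliderInput(ns("bins"), "Number of bins", min = 5, max = 50, value = 20),
        plotOutput(ns("plot"))
      )
    }

    # Server half; pairs with histogramUI via the shared id
    histogramServer <- function(id, values) {
      moduleServer(id, function(input, output, session) {
        output$plot <- renderPlot(hist(values, breaks = input$bins))
      })
    }

    ui <- fluidPage(histogramUI("eruptions"))
    server <- function(input, output, session) {
      histogramServer("eruptions", values = faithful$eruptions)
    }
    shinyApp(ui, server)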


Crafting open-source data science environments
A productive and empowering data science environment within an organisation requires the orchestration of different tools, systems and processes, spanning IDEs, R/Python package management, hosting and security through to data access and storage. It’s important that these elements are well set up and embedded to enable data scientists, analysts and researchers to create valuable insights. Come join the discussion about the critical factors, options and practices that better connect data, science and people.
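
For the package management piece specifically, one common approach in R (shown here as an illustrative sketch) is {renv}, which pins a project's package versions so the environment can be rebuilt exactly:

    # install.packages("renv")
    renv::init()      # create a project-local library and an renv.lock file
    # ...install and use packages as normal, then...
    renv::snapshot()  # record the exact package versions in renv.lock
    renv::restore()   # on another machine, rebuild the same environment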


Leading data science teams
Leading, developing and motivating a multi-disciplinary team of data analysts, researchers and software developers is no easy feat. Challenges include integrating the different roles required, creating the right infrastructure to support the team, and offering continuing professional development to stay up to date. Join this session to share your experience and learn from others what works and what doesn’t.

12:30pm

Networking lunch

2:00pm

Ripple - Using open-source tools to turn complex data into applied intelligence

Petra Muellner & Nick Snellgrove

Epi-interactive

eDNA data has the potential to improve our ability to make science-based decisions across a wide range of objectives – from community efforts to biosecurity risk management to large-scale ecological and conservation measures. However, if the insights created are fragmented and do not connect with decision-making in accessible ways, eDNA will fail to live up to its potential.


The data outputs of metabarcoding are undeniable, but how these are converted into outcomes for the technology is still a work in progress. The core challenge in making eDNA accessible is the need to embrace complexity yet distil data and insights into a format that is easy to digest, is visual, and connects eDNA with a wide diversity of stakeholder groups.


Highly customised, code-driven dashboards built with open-source software such as R create an opportunity to bridge this gap and drive collaboration and stakeholder engagement. Utilising know-how from the development of complex disease intelligence tools such as OpenFMD and Epidemix, we developed an open-source-based application that allows users to share and visualise eDNA data. The functionality can be highly customised to meet the wide range of activities that eDNA can support – from incursion management to state-of-the-environment reporting.


In this talk we share both the front-end challenges and the DevOps practices required to scale Ripple from an idea to a full-blown data science solution.

2:30pm

Reproducible multi-omics integration with the moiraine R package

Olivia Angelin-Bonnet

Plant & Food Research

The integrated analysis of multi-omics datasets is a challenging problem, owing to their complexity and high dimensionality. Numerous integration tools have been developed, but they are heterogeneous in terms of their underlying statistical methodology, input data requirements, implementation, and visual representation of the results generated.


To address this, I present the moiraine R package, which provides a framework to consistently visualise and integrate multi-omics datasets and to compare the results of different integration tools. In particular, moiraine uses available metadata to construct context-rich visualisations, which facilitate the interpretation of results by domain experts. The package has been designed to enable reproducible research through the construction of {targets} pipelines, and comprehensive documentation is available online in the form of a Quarto book.
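
As a hint of what that looks like, here is a minimal {targets} pipeline sketch. It illustrates the reproducible-workflow mechanism the package builds on; the integration step is a hypothetical placeholder, not moiraine's actual API.

    # _targets.R
    library(targets)

    list(
      tar_target(omics_files, c("transcriptomics.csv", "metabolomics.csv"),
                 format = "file"),                        # track raw data files
      tar_target(omics_data, lapply(omics_files, read.csv)),
      tar_target(integrated, integrate_omics(omics_data))  # hypothetical helper
    )

Running targets::tar_make() then executes only the steps whose inputs have changed, which is what makes the analysis both reproducible and cheap to re-run.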

2:45pm

TBC

3:00pm

Session Q&A

3:10pm

Panel discussion

Interactive discussion with speakers about recent developments, challenges and opportunities

3:40–5pm

Networking

Nibbles and drinks provided

Is R Exchange for me?

The numbers speak for themselves.

Established in 2020, R Exchange has been attended by over 250 participants from 65 organisations. 

It is quite humbling to see how this event has grown so much in only a few years. We remember the first R Exchange, when we were in a small, 30-person room on the Terrace and stopped at the supermarket on the way to the event to get some baking for the breaks.

The event would not exist without the enthusiasm of New Zealand’s open-source data science community, and it is a real privilege for us to organise and host it as part of our B Corp commitment to support local communities and to be about more than just profit.

Here's what our attendees say

"This was my second year attending and I very much enjoyed it. We work on an application that uses R in a bespoke manner for reporting purposes, so it’s highly useful to see the ways R is used elsewhere across the public service."

"I have definitely gotten a lot of connections and learnings out of it. I even felt motivated to do some sharing in the future too! Looking forward to next year's R Exchange."

"Many companies try to spearhead their respective user communities but it’s almost always artificial and profit driven. You can tell in attending R Exchange how much you are truly trying to support all the R community, not just current and potential customers. Huge respect!"