Introduction

A brief overview of brainrender's functionality

This documentation is for the new brainrender 2.0!

If you are getting started with brainrender, carry on. But if you have been using previous versions of the software, you might want to check the guide we've put together to transition to the new brainrender.

This section explains the logic behind how brainrender works. If you want more details about which classes/methods are part of each script, and what they do, check the automatically generated docs here.

A core design goal is to facilitate the rendering of any data registered to a reference atlas. To this end, brainrender facilitates the creation of 3D objects from many different types of data (e.g. cell locations, brain regions) with minimal need for the development of dedicated code. In addition, brainrender is fully integrated with BrainGlobe's atlasAPI, ensuring that you can use brainrender with any atlas supported by the API with no need for any changes in your code.

Overview of brainrender's workflow

The general workflow for any brainrender visualization consists of just a few steps:

  1. Load your data and generate a brainrender Actor. This can be done using custom code, or with the dedicated Actor classes provided by brainrender which can be used to render most types of data.

  2. Add your data to a brainrender Scene.

  3. Render your scene, or use it to create animated videos.
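The three steps above can be sketched in a few lines of Python. This is a minimal sketch, assuming the brainrender package is installed and a display is available; the atlas name (`allen_mouse_25um`) and region acronym (`TH`) are example values that can be swapped for any BrainGlobe-supported atlas and region, and the cell coordinates are placeholder data.

```python
import numpy as np
from brainrender import Scene
from brainrender.actors import Points

# Create a Scene; this loads the chosen reference atlas via BrainGlobe's atlasAPI.
scene = Scene(atlas_name="allen_mouse_25um")

# Step 1 & 2: generate Actors from your data and add them to the Scene.
# Brain regions can be added directly by acronym...
scene.add_brain_region("TH", alpha=0.4)

# ...and e.g. cell locations (coordinates in atlas space, in microns)
# become a Points actor. Random placeholder data is used here.
cells = np.random.uniform(3000, 8000, size=(50, 3))
scene.add(Points(cells, radius=30, colors="red"))

# Step 3: render the scene in an interactive window.
scene.render()
```

Dedicated Actor classes analogous to `Points` exist for other common data types, so in most cases no custom rendering code is needed.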

To learn more in detail how to use brainrender, keep reading this documentation and when you're ready check out the examples at the GitHub repository.

Using Notebooks

brainrender can be used with Jupyter notebooks, but some care must be used when doing that. Find more details here.

Getting in touch

For any questions, issues, or bug reports, you can get in touch on the GitHub repository or on Twitter.

Referencing brainrender

If you've found brainrender useful in your work, please cite brainrender's publication(s).

Check here for more details.