Contributing packages

The contribution process can be broken down into three steps:

  • Step 1. Staging process (add recipe and license).

    With the help of the staging process, add a package's recipe and license to the staged-recipes repository and create a PR.

  • Step 2. Post staging process.

    Once your PR has been merged, take a look at the Post staging process to see what follows.

  • Step 3. Maintaining the package.

    Contributing a package to conda-forge makes you the maintainer of that package. Learn more about the roles of a maintainer.

The sections below describe each step in more detail.

The staging process

The staging process, i.e., adding a package's recipe, has three steps:

  1. Generating the recipe
  2. Checklist
  3. Feedback and revision

Generating the recipe

There are currently three ways to generate a recipe:

  1. If it is an R package from CRAN, start by using the conda-forge helper script for R recipes. Then, if necessary, make manual edits to the recipe.

  2. If it is a Python package, you can generate the recipe as a starting point with grayskull.

    note

    Grayskull is an automatic conda recipe generator. The goal of the project is to generate concise recipes for conda-forge and eventually replace conda skeleton. Currently, Grayskull can generate recipes for Python packages available on PyPI, as well as for packages not published on PyPI but available as GitHub repositories.

    Installation and usage of grayskull:

    • Create a new environment: conda create --name MY_ENV. Replace MY_ENV with the environment name.
    • Activate the new environment: conda activate MY_ENV.
    • Install grayskull: conda install -c conda-forge grayskull.
    • Generate the recipe: grayskull pypi --use-v1-format --strict-conda-forge YOUR_PACKAGE_NAME. Replace YOUR_PACKAGE_NAME with the package name.
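
    Putting it together, the full sequence is:

    conda create --name MY_ENV
    conda activate MY_ENV
    conda install -c conda-forge grayskull
    grayskull pypi --use-v1-format --strict-conda-forge YOUR_PACKAGE_NAME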

    You do not necessarily have to use grayskull, and the recipes produced by grayskull might need to be reviewed and edited. Read more about grayskull and how to use it here.

  3. If it's none of the above, generate a recipe with the help of the example v1 recipe (or the example v0 recipe, if you wish to use the v0 format) in the staged-recipes repository and modify it as necessary.

Your final recipe should have no comments (unless they're actually relevant to the recipe, and not generic instruction comments), and follow the order in the example.

note

If there are any details you are not sure about, please create a pull request anyway. The conda-forge team will review it and help you make changes to it.

If you are building your first recipe for conda-forge, the step-by-step instructions and checklist below will help you toward a successful build.

Step-by-step Instructions

  1. Ensure your source code can be downloaded as a single file. Source code should be downloadable as an archive (.tar.gz, .zip, .tar.bz2, .tar.xz) or tagged on GitHub, to ensure that it can be verified. (For further detail, see Build from tarballs, not repos).
  2. Fork and clone the staged-recipes repository from GitHub.
  3. Checkout a new branch from the staged-recipes main branch.
  4. In the same terminal, go to the staged-recipes/recipes directory.
  5. Within your forked copy, create a new folder in the recipes folder for your package (i.e., ...staged-recipes/recipes/<name-of-package>).
  6. Generate the recipe or copy the example recipe.yaml (or meta.yaml) into this folder. All the changes in the following steps will happen in the COPIED recipe.yaml or meta.yaml (i.e., ...staged-recipes/recipes/<name-of-package>/recipe.yaml). Please leave the example directory unchanged!
  7. Modify the copied recipe file as needed. To see how to modify it, take a look at The recipe (recipe.yaml or meta.yaml) section.
  8. Generate the SHA256 hash for your source code archive, as described in the example recipe using the openssl tool. As an alternative, you can also go to the package description on PyPI from which you can directly copy the SHA256.
  9. Be sure to fill in the test section. The simplest test merely checks that the top-level package or module can be imported, as described in the example. Some projects have an empty top-level package; in those cases, make sure to also select some other modules that do contain runnable code.
  10. Remove all irrelevant comments in the recipe file.
tip

Be sure not to checksum a redirection page. Use, for example:

curl -sL https://github.com/username/reponame/archive/vX.X.X.tar.gz | openssl sha256

Checklist

  • Ensure that the license has the right case and that the license is correct. Note that case-sensitive input is required (e.g. Apache-2.0 rather than APACHE 2.0). Using SPDX identifiers for the license field is recommended (see SPDX Identifiers and Expressions).
  • Ensure that you have included a license file if your license requires one – most do. (see v0 example or v1 example)
  • If your project includes tests, you need to decide whether these tests should be executed while building the conda-forge feedstock.
  • Make sure that all tests pass successfully at least on your development machine.
  • Recommended: run the test locally on your source code to ensure the recipe works locally (see Running tests locally for staged recipes).
  • Make sure that your changes do not interfere with other recipes that are in the recipes folder (e.g. the example recipe).

Feedback and revision

Once you have finished your PR, all you have to do is wait for feedback from our review team.

The review team will assist you by pointing out improvements and answering questions. Once the package is ready, the reviewers will approve and merge your pull request.

After merging the PR, our CI infrastructure will build the package and make it available on the conda-forge channel.

note

If you have questions or have not heard back for a while, you can notify us by including @conda-forge/staged-recipes in your GitHub message.

Post staging process

  • After the PR is merged, our CI services will create a new git repo automatically. For example, the recipe for a package named pydstool will be moved to a new repository https://github.com/conda-forge/pydstool-feedstock. This process is automated through a CI job on the conda-forge/staged-recipes repo. It sometimes fails due to API rate limits and will automatically retry itself. If your feedstock has not been created after a day or so, please get in touch with the conda-forge/core team for help.
  • CI services will be enabled and a build will be triggered automatically; the build produces the conda package and uploads it to https://anaconda.org/conda-forge
  • If this is your first contribution, you will be added to the conda-forge team and given access to the CI services so that you can stop and restart builds. You will also be given commit rights to the new git repository.
  • If you want to make a change to the recipe, send a PR to the git repository from a fork. Branches of the main repository are used for maintaining different versions only.

Feedstock repository structure

Once the PR containing the recipe for a package is merged in the staged-recipes repository, a new repository is created automatically called <package-name>-feedstock. A feedstock is made up of a conda recipe (the instructions on what and how to build the package) and the necessary configuration files for automatic builds using freely available continuous integration (CI) services.

Each feedstock contains various files that are generated automatically using our automated provisioning tool conda-smithy. Broadly, every feedstock has the following files:

recipe

This folder contains the recipe file (recipe.yaml or meta.yaml) and any other files/scripts needed to build the package.

LICENSE.txt

This file is the license for the recipe itself. It is different from the package license, which you define when submitting the package recipe via license_file in the recipe file.

CI-files

These are the CI configuration files for service providers like Azure.

conda-forge.yml

This file is used to configure how the feedstock is set up and built. Making any changes in this file usually requires Rerendering feedstocks.
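
A minimal example (a sketch; bot.automerge is one of many options documented for conda-smithy):

bot:
  automerge: true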

Maintainer role

The maintainer's job is to:

  • Keep the feedstock updated by merging maintenance PRs from conda-forge's bots.
  • Keep the feedstock on par with new releases of the source package by:
    • Bumping the version number and checksum.
    • Making sure that the feedstock's requirements stay accurate.
    • Making sure the test requirements match those of the updated package.
  • Answer questions about the package on the feedstock issue tracker.

Adding multiple packages at once

If you would like to add more than one related package, they can be added to staged-recipes in a single pull request (in separate directories). If the packages are interdependent (i.e. one package being added lists one or more of the other packages being added as a requirement), the build script will be able to locate the dependencies that are only present within staged-recipes, as long as the builds finish in dependency order. Using a single pull request allows you to quickly get packages set up without waiting for each package in a dependency chain to be reviewed, built, and added to the conda-forge channel before starting the process over with the next recipe in the chain.

note

When PRs with multiple interdependent recipes are merged, there may be an error if a build finishes before its dependency is built. If this occurs, you can trigger a new build by pushing an empty commit:

git commit --allow-empty -m "Retrigger CI" && git push

Synchronizing fork for future use

If you would like to add additional packages in the future, you will need to reset your fork of staged-recipes before creating a new branch on your fork, adding the new package directory/recipe, and creating a pull request. This step ensures you have the most recent version of the tools and configuration files contained in the staged-recipes repository and makes the pull request much easier to review. The following steps will reset your fork of staged-recipes and should be executed from within a clone of your forked staged-recipes directory.

  1. Checkout your main branch:
    git checkout main
  2. Define the conda-forge/staged-recipes repository as upstream (if you have not already done so):
    git remote add upstream https://github.com/conda-forge/staged-recipes.git
  3. Pull all of the upstream commits from the upstream main branch:
    git pull --rebase upstream main
  4. Push all of the changes to your fork on GitHub (make sure there are no changes on GitHub that you need, because they will be overwritten):
    git push origin main --force

Once these steps are complete, you can continue with the steps in Step-by-step Instructions to stage your new package recipe using your existing staged-recipes fork.

The recipe (recipe.yaml or meta.yaml)

The recipe file is at the heart of every conda package. It is located in the recipe directory, and is named either:

  • recipe.yaml for the newer v1 recipes
  • meta.yaml for the older v0 recipes

It defines everything that is required to build and use the package.

Both files use YAML syntax, augmented with Jinja templating.

The full reference of the formats can be found in the rattler-build documentation (v1 recipes) and the conda-build documentation (v0 recipes).

In the following, we highlight particularly important and conda-forge specific information and guidelines, ordered by the section in the YAML file. Whenever the syntax between v0 and v1 recipes differs, examples for both versions are provided. Otherwise, the provided snippet works for both recipe versions.

Source

Build from tarballs, not repos

Packages should be built from tarballs using the url key, not from repositories directly by using e.g. git_url.

There are several reasons behind this rule:

  • Repositories are usually larger than tarballs, draining shared CI time and bandwidth.
  • Repositories are not checksummed. Thus, using a tarball gives a stronger guarantee that the download that is obtained to build from is in fact the intended package.
  • On some systems, it is possible to not have permission to remove a repo once it is created.
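
For example, a typical v1 source section pointing at a tagged tarball looks like this (the project URL and hash are placeholders):

source:
  url: https://github.com/example-org/example/archive/v1.2.3.tar.gz
  sha256: 0000000000000000000000000000000000000000000000000000000000000000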

Populating the hash field

If your package is on PyPI, you can get the sha256 hash from your package's page on PyPI; look for the SHA256 link next to the download link on your package's files page, e.g. https://pypi.org/project/<your-project>/#files.

You can also generate a hash from the command line on Linux (and macOS, if you install the necessary tools below).

To generate the sha256 hash, run: openssl sha256 your_sdist.tar.gz

You may need the openssl package, available on conda-forge: conda install openssl -c conda-forge.

tip

Be sure not to checksum a redirection page. Use, for example:

curl -sL https://github.com/username/reponame/archive/vX.X.X.tar.gz | openssl sha256

Downloading extra sources and data files

conda-build (v3 and above) and rattler-build support multiple sources per recipe. Examples are available in the rattler-build and conda-build docs.
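
As a sketch, a v1 recipe pulling an extra data archive next to the main source could look like this (URLs are placeholders; v0 recipes use folder instead of target_directory):

source:
  - url: https://github.com/example-org/example/archive/v1.2.3.tar.gz
    sha256: <hash of the main source>
  - url: https://example.org/data/extra-data.tar.gz
    sha256: <hash of the data archive>
    target_directory: data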

Build

Skipping builds

Use the skip key in the build section. In v1 recipes, it accepts any of:

  • a boolean value, such as true to skip the build unconditionally;
  • a selector expression to skip the build when it evaluates to true;
  • a list of selector expressions, in which case the build is skipped if any of the expressions evaluates to true.

In v0 recipes, only a boolean value is directly accepted, and it needs to be combined with a selector expression via templating. Different selectors can either be combined using or clauses, or multiple skip: true entries can be used; see the sketch after the following examples.

You can e.g. specify not to build …

  • on specific architectures:

    build:
      skip:
        - win
        - osx-arm64
  • for specific Python versions:

    build:
      skip: match(python, "<3.12")
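
In v0 recipes, the same skips are expressed with skip: true plus selector comments, for example (a sketch using standard conda-build selectors):

build:
  skip: true  # [win or (osx and arm64)]
  skip: true  # [py<312]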

A full description of selectors is in the rattler-build and conda-build docs.

Optional: build scripts

In many cases, explicit build scripts are not required. Pure Python packages almost never need them.

The default names for the build scripts are

  • build.sh for Unix
  • build.bat for Windows, in v1 recipes
  • bld.bat for Windows, in v0 recipes

If the build can be executed in one line, you may put this line in the script entry of the build section of the recipe file, e.g.: script: "${{ PYTHON }} -m pip install . -vv --no-deps --no-build-isolation" (in v0 recipes, use {{ PYTHON }}).

Remember to always add pip to the host requirements.

Use pip

Normally Python packages should use this line:

build:
  script: "${{ PYTHON }} -m pip install . -vv --no-deps --no-build-isolation"

as the installation script in the recipe file or the build script files, while adding pip to the host requirements:

requirements:
  host:
    - pip

These options should be used to ensure a clean installation of the package without its dependencies. This helps make sure that we're only including this package, and not accidentally bringing any dependencies along into the conda package.

Usually pure-Python packages only require python, setuptools and pip as host requirements; the real package dependencies are only run requirements.
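
For a pure-Python package, the requirements section therefore typically looks like this (requests stands in for the package's actual run dependencies):

requirements:
  host:
    - python
    - setuptools
    - pip
  run:
    - python
    - requests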

Requirements

Build, host and run

Three primary kinds of dependencies are recognized. In the following paragraphs, we give a very short overview of what packages go where. For a detailed explanation, please refer to the rattler-build or conda-build documentation, respectively.

Build

Build dependencies are required in the build environment; they comprise all tools that are needed to build the package but not at run time on the host.

The following packages are examples of typical build dependencies:

  • compilers (e.g. via the compiler() Jinja function)
  • build tools such as cmake, make, or pkg-config
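
In a recipe, such build dependencies typically appear as follows (the selection is illustrative):

requirements:
  build:
    - ${{ compiler('c') }}
    - cmake
    - make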

Host

Host dependencies are required during the build phase but, in contrast to build dependencies, they have to be present on the host.

The following packages are typical examples of host dependencies:

  • shared libraries (C/C++)
  • Python/R libraries that link against C libraries (see e.g. Building Against NumPy)
  • python, r-base
  • setuptools, pip (see Use pip)

Run

Run dependencies are only required at run time of the package. Run dependencies typically include:

  • most Python/R libraries

Avoid external dependencies

As a general rule: all dependencies have to be packaged by conda-forge as well. This is necessary to ensure ABI compatibility for all our packages.

There are only a few exceptions to this rule:

  1. Some dependencies have to be satisfied with CDT packages (see Core Dependency Tree Packages (CDTs)).
  2. Some packages require root access (e.g. device drivers) that cannot be distributed by conda-forge. These dependencies should be avoided whenever possible.

Pinning

Linking against shared C/C++ libraries makes the package depend on the ABI of the library that was used at build time. The exposed interface changes when previously exposed symbols are deleted or modified in a newer version.

It is therefore crucial to ensure that only library versions with a compatible ABI are used after linking.

In the best case, the shared library you depend on exports its own pinning information via run_exports (as core libraries such as readline and libpng do), or it is covered by conda-forge's global pinning.

In these cases you do not have to worry about version requirements:

requirements:
  # [...]
  host:
    - readline
    - libpng

In other cases, you have to specify ABI-compatible versions manually.

requirements:
  # [...]
  host:
    - libawesome 1.1.*

For more information on pinning, please refer to Pinned dependencies.

Constraining packages at runtime

The run_constrained section allows defining restrictions on packages at runtime without depending on the package. It can be used to restrict the allowed versions of optional dependencies and to define incompatible packages.

Defining non-dependency restrictions

Imagine a package can be used together with version 1 of awesome-software when present, but does not strictly depend on it. You would therefore like to let users choose whether to use the package with or without awesome-software. Let's assume further that the package is incompatible with version 2 of awesome-software.

In this case, run constraints can be used to restrict awesome-software to version 1.*, if the user chooses to install it. In v1 recipes, the key run_constraints is used, whereas in v0 recipes it's run_constrained:

requirements:
  # [...]
  run_constraints:
    - awesome-software 1.*
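
The v0 equivalent is:

requirements:
  # [...]
  run_constrained:
    - awesome-software 1.*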

Here the run constraint acts as a means to protect users from incompatible versions without introducing an unwanted dependency.

Defining conflicts

Sometimes packages interfere with each other and therefore only one of them can be installed at any time. In combination with an unsatisfiable version, run constraints can define blockers:

package:
  name: awesome-db

requirements:
  # [...]
  run_constraints:
    - amazing-db <0.0a0

In this example, awesome-db cannot be installed together with amazing-db as the constraint amazing-db <0.0a0 is impossible to satisfy.

Test

All recipes need tests. Here are some tips, tricks, and justifications. How you should test depends on the type of package (Python, C library, command-line tool, …) and on what tests are available for that package. But every conda package must have at least some tests.

Simple existence tests

Sometimes defining tests seems hard, e.g. because:

  • tests for the underlying code base may not exist.
  • test suites may take too long to run on limited CI infrastructure.
  • tests may take too much bandwidth.

In these cases, conda-forge may not be able to execute the prescribed test suite.

However, this is no reason for the recipe to not have tests. At the very least, we want to verify that the package has installed the desired files in the desired locations. This is called existence testing.

In v1 recipes, existence testing can be accomplished with package_contents tests. In v0 recipes, it needs to be done manually in the commands subsection of the test section.

On POSIX systems, use the test utility and the $PREFIX variable. On Windows, use the exist command. See below for an example.

Simple existence testing example:

tests:
  - package_contents:
      lib:
        # $PREFIX/lib/libboost_log.so on Linux
        # $PREFIX/lib/libboost_log.dylib on macOS
        # %LIBRARY_BIN%/boost_log.dll and %LIBRARY_LIB%/boost_log.lib on Windows
        - boost_log
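
In v0 recipes, an equivalent manual check could look like this (a sketch; SHLIB_EXT is a conda-build variable holding the platform's shared-library extension):

test:
  commands:
    - test -f $PREFIX/lib/libboost_log$SHLIB_EXT  # [unix]
    - if not exist %LIBRARY_BIN%\boost_log.dll exit 1  # [win]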

Testing Python packages

For the best information about testing, see the test sections in the rattler-build or conda-build docs.

The recommended minimum of testing for Python packages involves the following tests:

  • testing whether one or more installed Python modules can be imported correctly,
  • running pip check to ensure that all dependencies specified in the Python metadata are met.

In v1 recipes, this is accomplished using a python test which combines both imports testing and an automatic pip check test. In v0 recipes, a combination of imports test with an explicit command test needs to be used:

tests:
  - python:
      imports:
        - package_name
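
The corresponding v0 combination of an imports test and a pip check command looks like this:

test:
  imports:
    - package_name
  requires:
    - pip
  commands:
    - pip check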

Note that package_name is the name imported by Python; not necessarily the name of the conda package (they are sometimes different), or even the wheel name.

Testing for an import will catch the bulk of the packaging errors, generally including the presence of dependencies. However, it does not assure that the package works correctly. In particular, it doesn't test if it works correctly with the versions of dependencies used. In some cases, the top level import name does not contain any executable code (e.g. a package with an empty __init__.py, or without any direct imports). This test would always pass! In these cases, it helps to add more imports explicitly targeting modules that do contain executable code (e.g. package_name.core).

It is good to run some other tests of the code itself (the test suite) if possible.

note

pip check can sometimes fail due to metadata discrepancies between PyPI and conda-forge (e.g. same package with different names). In these cases, the reviewer must evaluate whether the error was a false negative. Tip: use pip list to show what pip check "sees".

Running unit tests

There are multiple ways to run unit tests in Python, pytest being the most common.

Also, some packages install the tests with the package, so they can be run in place, while others keep the tests with the source code, so they cannot be run straight from an installed package.

Test requirements

Sometimes there are packages required to run the tests that are not required to simply use the package. This is usually a test-running framework, such as pytest. You can ensure that it is included by adding it to requirements in the test stanza:

tests:
  - python:
      imports:
        - package_name
  - requirements:
      run:
        - pytest
    script:
      - ... # details below

Copying test files

Often test files are not installed alongside packages. Conda creates a fresh working directory to execute the test stage of build recipes, which does not contain the files of the source package.

You can include files required for testing with the files/source section in v1 recipes, or source_files section in v0 recipes:

tests:
  - python:
      imports:
        - package_name
  - requirements:
      run:
        - pytest
    files:
      source:
        - tests/
        - test_pkg_integration.py
    script:
      - pytest tests/
      - pytest test_pkg_integration.py

They work for files and directories.

Built-in tests

Some packages have testing built-in. In this case, you can put a test command directly in the test stanza:

tests:
  # [...]
  - script:
      - python -c "import package_name; package_name.tests.runall()"

Custom test script

Alternatively, a custom test script can be created and placed in the recipe directory. This allows an arbitrarily complicated test script. For v1 recipes, the script is listed via a script test with a file key:

tests:
  - script:
      file: run_test.py

For v0 recipes, the file is always called run_test.py, and it is used automatically when present. Note, however, that it will entirely override the test: section of the recipe, which is then silently ignored.

pytest tests

If the tests are installed with the package, pytest can find and run them for you with the following command:

tests:
  - requirements:
      run:
        - pytest
    script:
      - pytest --pyargs package_name

Command Line Utilities

If a Python package installs command line utilities, you probably want to test that they were properly installed:

tests:
  - script:
      - util_1 --help

If the utility actually has a test mode, great. Otherwise simply invoking --help or --version or something will at least test that it is installed and can run.

Testing R packages

R packages should be tested for successful library loading. All recipes for CRAN packages should begin from conda_r_skeleton_helper and will automatically include library loading tests. However, many R packages also include testthat tests that can potentially be run. While optional, additional testing is encouraged when packages:

  • provide interfaces to other (compiled) libraries (e.g., r-curl, r-xml2)
  • extend functionality of or integrate many other R libraries (e.g., r-vetiver)
  • are cornerstone R packages that provide often-used functions (e.g., r-rmarkdown)

Testing R library loading

The minimal test of an R package should ensure that the delivered library can be successfully loaded. This is accomplished with:

tests:
  - r:
      libraries:
        - PackageName

Note that PackageName is the name imported by R; not necessarily the name of the conda package (e.g., r-matrix delivers Matrix).

Running testthat tests

A typical test section for an R package with testthat testing will look like:

tests:
  - r:
      libraries:
        - PackageName
  - files:
      source:
        - tests/
    requirements:
      run:
        - r-testthat
    script:
      - R -e "testthat::test_file('tests/testthat.R', stop_on_failure=TRUE)"

note

We recommend including a library loading check before the testthat tests.

First, one needs to declare that the test environment has r-testthat installed. One may need additional requirements here, especially if a package has optional functionality that is tested.

note

If any testthat tests fail due to missing packages, maintainers are encouraged to communicate this to the upstream repository. Some R packages have optional functionality that usually involves packages listed under the Suggests: section of the DESCRIPTION file. Developers should be using testthat::skip_if_not_installed() functions to guard against test failures when optional packages are not installed. Posting an Issue or Pull Request when this is not done will help improve testing practices in the R ecosystem.

Second, one needs to declare where to source the tests. R package tests will be found in the tests/ directory of the tarball. This will typically include a tests/testthat.R file and additional tests under tests/testthat/test_*.R. Auxiliary directories and files may also be present and needed for specific tests.

The default R build procedure on conda-forge will not include the tests/ directory in the final build. While it is possible to do this (via an --install-tests flag), it is preferable to use the files.source or source_files keys (for recipe.yaml or meta.yaml, respectively) to copy the tests for the testing phase only.

Finally, one uses the testthat::test_file() function to test the tests/testthat.R file, which for most packages serves as the main entry point for all the other tests. By default, this function does not return an error value on test failures, so one needs to pass the argument stop_on_failure=TRUE to ensure that test failures propagate to conda-build.

There are scenarios where the tests/testthat.R file does not orchestrate the individual tests. In that case, one can instead test the tests/testthat directory with

tests:
  - script:
      - R -e "testthat::test_dir('tests/testthat/', package='PackageName', load_package='installed', stop_on_failure=TRUE)"

In this case, the function will error on any failures by default. Again, the PackageName here refers to the R library name.

Tests outside of the package

Note that conda-build runs the tests in an isolated environment after installing the package – thus, at this point it does not have access to the original source tarball. This is to ensure that the test environment is as close as possible to what an end-user will see.

This makes it very hard to run tests that are not installed with the package.

Running tests locally for staged recipes

If you want to build packages from the staged-recipes repository locally, go to the repository's root directory and run the build-locally.py script (you need Python 3), then follow the prompt to select the variant you'd like to build. Building Linux packages requires Docker to be installed on your machine. For macOS, the script will prompt you to select a location (e.g. export OSX_SDK_DIR=/opt) for the SDK to be downloaded.

$ cd ~/staged-recipes
$ python build-locally.py

If you know which image you want to build, you can specify it as an argument to the script.

$ cd ~/staged-recipes
$ python build-locally.py <VARIANT>

where <VARIANT> is one of the file names in the .ci_support/ directory, e.g. linux64, osx64, and linux64_cuda<version>.

About

Packaging the license manually

Sometimes upstream maintainers do not include a license file in their tarball, even though the license requires one.

If this is the case, you can add the license to the recipe directory (here named LICENSE.txt) and reference it inside the recipe file:

about:
  license_file: LICENSE.txt

In this case, please also notify the upstream developers that the license file is missing.

Important

The license should only be shipped along with the recipe if there is no license file in the downloaded archive. If there is a license file in the archive, please set license_file to the path of the license file in the archive.
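
For example, if the downloaded archive ships its license at the top level:

about:
  license_file: LICENSE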

SPDX Identifiers and Expressions

For the about: license entry in the recipe file, using an SPDX identifier or expression is recommended.

See SPDX license identifiers for the licenses. See SPDX license exceptions for license exceptions. See SPDX specification Annex D for the specification on expressions. Some examples of these are:

Apache-2.0
Apache-2.0 WITH LLVM-exception
BSD-3-Clause
BSD-3-Clause OR MIT
GPL-2.0-or-later
LGPL-2.0-only OR GPL-2.0-only
LicenseRef-HDF5
MIT
MIT AND BSD-2-Clause
PSF-2.0
Unlicense

Licenses of included dependencies

For some languages (Go, Rust, etc.), the current policy is to include all dependencies and their dependencies in the package. This presents a problem when packaging the license files, as each dependency needs to have its license file included in the recipe.

For some languages, the community provides tools which can automate this process, enabling the automatic inclusion of all needed license files.

Rust

cargo-bundle-licenses can be included in the build process of a package and will automatically collect and add the license files of all dependencies of a package.

For a detailed description, please visit the project page; a short example can be found below.

First, include the collection of licenses as a step of the build process.

build:
  number: 0
  script:
    - cargo-bundle-licenses --format yaml --output THIRDPARTY.yml
    - build_command_goes_here

Then, include the tool as a build time dependency.

requirements:
  build:
    - cargo-bundle-licenses

Finally, make sure that the generated file is included in the recipe.

about:
  license_file:
    - THIRDPARTY.yml
    - package_license.txt

Go

go-licenses can be included in the build process of a package and will automatically collect and add the license files of all dependencies of a package.

For a detailed description, please visit the project page; a short example can be found below.

First, include the collection of licenses as a step of the build process.

build:
  number: 0
  script:
    - go build [...]
    - go-licenses save . --save_path="./license-files/"

Then, include the tool as a build time dependency.

requirements:
  build:
    - ${{ compiler('go') }}
    - go-licenses

Finally, make sure that the generated file is included in the recipe.

about:
  license_file:
    - LICENSE
    - license-files/

Important

We are not lawyers and cannot guarantee that the above advice is correct or that the tools are able to find all license files. Additionally, we are unable to accept any responsibility or liability. It is always your responsibility to double-check that all licenses are included and verify that any generated output is correct.

note

The correct and automated packaging of dependency licenses is an ongoing discussion. Please feel free to add your thoughts.

Extra

Recipe Maintainer

A maintainer is an individual who is responsible for maintaining and updating one or more feedstock repositories and packages, as well as their future versions. Maintainers have push access to the feedstock repositories of the packages they maintain and can merge pull requests into them.

Contributing a recipe for a package automatically makes you a maintainer of that package. See Maintainer role and Maintaining packages to learn more about what maintainers do. If you wish to become a maintainer of a certain package, contact the current maintainers by opening an issue in that package's feedstock with the following command:

@conda-forge-admin, please add user @username

where username is the GitHub username of the new maintainer to be added. Please refer to Becoming a maintainer and Updating the maintainer list for detailed instructions.

Feedstock name

If you want the name of the feedstock to be different from the package name in staged-recipes, you can use the feedstock-name directive in the recipe of that package, like this:

extra:
  feedstock-name: <name>

Here, <name> is the name you would want for the feedstock. If not specified, the name will be taken from the top-level name field in the recipe file.

Miscellaneous

Activate scripts

Recipes are allowed to have activate scripts, which will be sourced or called when the environment is activated. It is generally recommended to avoid using activate scripts when another option is possible because people do not always activate environments the expected way and these packages may then misbehave.

When using them in a recipe, feel free to name them activate.bat, activate.sh, deactivate.bat, and deactivate.sh in the recipe. It is recommended to prefix the installed scripts with the package name and a separator. Below is some sample code for Unix and Windows that will make this install process easier; please feel free to lift it.

# Copy the [de]activate scripts to $PREFIX/etc/conda/[de]activate.d.
# This will allow them to be run on environment activation.
for CHANGE in "activate" "deactivate"
do
  mkdir -p "${PREFIX}/etc/conda/${CHANGE}.d"
  cp "${RECIPE_DIR}/${CHANGE}.sh" "${PREFIX}/etc/conda/${CHANGE}.d/${PKG_NAME}_${CHANGE}.sh"
done
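
A Windows counterpart (a sketch following the same pattern; note the %%F loop-variable syntax used inside .bat scripts):

:: Copy the [de]activate scripts to %PREFIX%\etc\conda\[de]activate.d.
:: This will allow them to be run on environment activation.
for %%F in (activate deactivate) DO (
    if not exist "%PREFIX%\etc\conda\%%F.d" mkdir "%PREFIX%\etc\conda\%%F.d"
    copy "%RECIPE_DIR%\%%F.bat" "%PREFIX%\etc\conda\%%F.d\%PKG_NAME%_%%F.bat"
)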

Jinja templating

The recipe file can contain expressions that are evaluated during build time. These expressions are written in Jinja syntax.

Jinja expressions serve the following purposes in the recipe file:

  • They allow defining variables to avoid code duplication. For example, using a variable for the version means each update only has to change it in one place.

    context:
      version: "3.7.3"

    package:
      name: python
      version: ${{ version }}

    source:
      url: https://www.python.org/ftp/python/${{ version }}/Python-${{ version }}.tar.xz
      sha256: da60b54064d4cfcd9c26576f6df2690e62085123826cff2e667e72a91952d318
  • They can call rattler-build or conda-build functions for automatic code generation. Examples are the compilers or the pin_compatible function.

    requirements:
      build:
        - ${{ compiler('c') }}
        - ${{ compiler('cxx') }}
        - ${{ stdlib('c') }}

    or

    requirements:
      build:
        - ${{ compiler('c') }}
        - ${{ compiler('cxx') }}
        - ${{ stdlib('c') }}
      host:
        - python
        - numpy
      run:
        - python

For more information please refer to the "Templating with Jinja" section in the rattler-build or conda-build docs.