Error installing zfit with pip under the Bleeding Edge stack

I am using the "Bleeding Edge" stack and I am trying to install the zfit Python package, which depends on TensorFlow (Scalable pythonic fitting — zfit 0.6.6.dev75+g726302ed documentation).

But when I run pip install --user zfit, I get the following dependency error from pip:

ERROR: pip's dependency resolver does not currently take into account all the packages that are installed. This behaviour is the source of the following dependency conflicts.

torch 1.7.0a0 requires dataclasses, which is not installed. 
virtualenv 20.4.3 requires distlib<1,>=0.3.1, but you have distlib 0.2.9 which is incompatible.
tensorflow-cpu 2.3.0 requires gast==0.3.3, but you have gast 0.4.0 which is incompatible.
tensorflow-cpu 2.3.0 requires h5py<2.11.0,>=2.10.0, but you have h5py 3.1.0 which is incompatible.
tensorflow-cpu 2.3.0 requires numpy<1.19.0,>=1.16.0, but you have numpy 1.19.5 which is incompatible.
tensorflow-cpu 2.3.0 requires scipy==1.4.1, but you have scipy 1.5.1 which is incompatible. 
tensorflow-cpu 2.3.0 requires tensorflow-estimator<2.4.0,>=2.3.0, but you have tensorflow-estimator 2.5.0 which is incompatible. 
astroid 2.3.3 requires wrapt==1.11.*, but you have wrapt 1.12.1 which is incompatible. 
archspec 0.1.2 requires click<8.0,>=7.1.2, but you have click 7.0 which is incompatible.

What should I do to be able to install zfit?

Thanks !

Marie Hartmann

Hi,

Can you also add --upgrade to the pip command?

Hi!

When I use the pip install --user --upgrade zfit command, I still get a dependency resolver error from pip:

ERROR: pip's dependency resolver does not currently take into account all the packages that are installed. This behaviour is the source of the following dependency conflicts.
torch 1.7.0a0 requires dataclasses, which is not installed.
virtualenv 20.4.3 requires distlib<1,>=0.3.1, but you have distlib 0.2.9 which is incompatible.
tensorflow-cpu 2.3.0 requires gast==0.3.3, but you have gast 0.4.0 which is incompatible.
tensorflow-cpu 2.3.0 requires h5py<2.11.0,>=2.10.0, but you have h5py 3.1.0 which is incompatible.
tensorflow-cpu 2.3.0 requires numpy<1.19.0,>=1.16.0, but you have numpy 1.19.5 which is incompatible.
tensorflow-cpu 2.3.0 requires scipy==1.4.1, but you have scipy 1.5.1 which is incompatible.
tensorflow-cpu 2.3.0 requires tensorflow-estimator<2.4.0,>=2.3.0, but you have tensorflow-estimator 2.5.0 which is incompatible.
astroid 2.3.3 requires wrapt==1.11.*, but you have wrapt 1.12.1 which is incompatible.
archspec 0.1.2 requires click<8.0,>=7.1.2, but you have click 7.0 which is incompatible.

I also tried to upgrade only tensorflow using the same pip command, but I ended up with a similar dependency resolver error.

Hello,

This looks like a problem with pip not being able to figure out how to install zfit without having a conflict with what’s on the LCG release.

We are now working on offering users the possibility to define their own environments in SWAN (e.g. with conda), but this is not yet in production.

At this moment, you could try:

  • Starting your session with a different LCG release (e.g. 98 instead of 99) and trying the command again. Perhaps the package versions in that release will let the installation succeed.
  • These are some instructions on how to set up a conda environment right now in SWAN, contributed by a user: Installing custom Jupyter kernels at SWAN startup. But this is still a hack, since we want to provide an integrated solution in SWAN as I explained above; a minimal sketch of the steps follows after this list.
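For illustration, here is a minimal sketch of that kind of setup, assuming conda is already available in your session; the environment name zfit-env and the Python version are placeholders, not taken from the linked recipe:

```
# Create and activate a dedicated conda environment (name is a placeholder)
conda create -n zfit-env python=3.8 -y
conda activate zfit-env

# Install zfit into the clean environment, away from the LCG stack's pins
pip install zfit

# Register the environment as a Jupyter kernel so notebooks can select it
pip install ipykernel
python -m ipykernel install --user --name zfit-env
```

The point of the dedicated environment is that pip resolves zfit's dependencies against an empty site-packages instead of the pinned LCG release, which is what causes the conflicts above.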

Hello,

Is the hack still the only way to set up a conda environment, or has the possibility to define one's own environments in SWAN been put into production? I ran into the same error and found this solution while I was about to ask about it, but I wanted to know whether the recommended way to proceed is still the same.

Thanks a lot for your help!

Xavier

Hello,

Yes, there is still no way to define conda environments in SWAN other than the recipe suggested in the link above. But this is a topic we would like to address in the SWAN team, most probably after the migration to JupyterLab is completed.

Another ingredient here is that we are working on integrating SWAN with Binder. In practice, this means users would have the possibility to choose their own image. That would also be a way to set up a custom environment, although admittedly a more involved one, since the user would need to create a container image.

Hi,

Thanks a lot for the prompt feedback! I will follow the recipe for now, but I am looking forward to the new features you mention. :slight_smile:

:tada:

Are you sure? Binder uses repo2docker, which supports conda environment.yaml files out of the box. Sure, under the hood this means building a container, but from a user's perspective you don't need to care. I assume this will be part of the SWAN offering too, no?
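For example, a single environment.yml at the repository root is all repo2docker needs to build a matching image. A hypothetical sketch (the package list is illustrative, not from this thread):

```
# Write a minimal environment.yml; repo2docker detects this file at the
# repository root and builds the container image from it automatically.
cat > environment.yml <<'EOF'
channels:
  - conda-forge
dependencies:
  - python=3.8
  - pip
  - pip:
      - zfit
EOF
```

Committing that file is the only user-facing step; the container build happens behind the scenes.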

> I assume this will be part of the SWAN offering too, no?

Yes, that is correct: that shortcut will be provided too. Thanks for pointing it out!
