Links behind "Try it" buttons for CERNBox and SWAN

Hi,
we (James Catmore and Maiken Pedersen, University of Oslo) have been in contact with you on another ticket regarding a first test setup of ScienceBox. We got through the first hurdles; the next problem is that we are having issues with the Try-it links on the CERNBox and SWAN tabs.

It seems that the Try-it button on the CERNBox tab (from https://science-box.sciencebox.uiocloud.no/) points to whatever is first in /etc/hosts, while the Try-it link on the SWAN tab keeps giving the short hostname.

So if the FQDN is
science-box.sciencebox.grid.uiocloud.no
and /etc/hosts contains
science-box.sciencebox.grid.uiocloud.no
and the hostname command returns
science-box.sciencebox.grid.uiocloud.no

Then the Try-it button on the CERNBox tab gives:
https://science-box.sciencebox.uiocloud.no/cernbox/index.php/login
while the Try-it button on the SWAN tab gives
https://science-box:8443/

The latter is not found.

What are the recommendations for configuring ScienceBox and/or the host machine?
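For reference, our understanding is that a conventional /etc/hosts entry maps an IP address to the FQDN first, with the short name after it; the address below is only a made-up example:

    # example only: replace 192.0.2.10 with the instance's actual IP
    192.0.2.10   science-box.sciencebox.grid.uiocloud.no   science-box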

Thanks!
Maiken


Hi @maikenp

I tried it, and the CERNBox link appears to be good; I was even able to log in with user0.
The SWAN link is pointing me to
https://science-box.sciencebox.uiocloud.no/swan
but I am getting a timeout. Can you check that your SWAN container is running?
Use

docker ps -a

if you are using uboxed,
or

kubectl -n boxed get pods -o wide

if you are using kuboxed.
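For instance, with uboxed the SWAN service runs in the jupyterhub container, so you could narrow the check down to something like:

    sudo docker ps --filter name=jupyterhub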

cheers
Omar.

Hi,

Well, in fact I see no SWAN among the docker containers.

sudo docker ps
    CONTAINER ID        IMAGE                                                          COMMAND                  CREATED             STATUS              PORTS                                      NAMES
    4337e6e1f496        gitlab-registry.cern.ch/swan/docker-images/jupyterhub:v1.9     "/bin/bash /root/sta…"   16 hours ago        Up 16 hours         0.0.0.0:8443->443/tcp                      jupyterhub
    4f68f4323b61        gitlab-registry.cern.ch/cernbox/boxedhub/ldap:v0.2             "/bin/sh -c '/contai…"   16 hours ago        Up 16 hours         389/tcp, 636/tcp                           ldap
    015a713043e8        gitlab-registry.cern.ch/cernbox/boxedhub/eos-storage:v0.9      "/bin/bash /root/sta…"   16 hours ago        Up 16 hours                                                    eos-fst4
    eeb89de9da72        gitlab-registry.cern.ch/cernbox/boxedhub/cernboxmysql:v1.0     "/bin/bash /root/sta…"   16 hours ago        Up 16 hours         3306/tcp                                   cernboxmysql
    b2eed08f1809        gitlab-registry.cern.ch/cernbox/boxedhub/eos-storage:v0.9      "/bin/bash /root/sta…"   16 hours ago        Up 16 hours                                                    eos-mq
    0f949179ac85        gitlab-registry.cern.ch/cernbox/boxedhub/eos-storage:v0.9      "/bin/bash /root/sta…"   16 hours ago        Up 16 hours                                                    eos-fst1
    073cec1fcb47        gitlab-registry.cern.ch/cernbox/boxedhub/eos-storage:v0.9      "/bin/bash /root/sta…"   16 hours ago        Up 16 hours                                                    eos-mgm
    12dc2147ad5b        gitlab-registry.cern.ch/cernbox/boxedhub/eos-storage:v0.9      "/bin/bash /root/sta…"   16 hours ago        Up 16 hours                                                    eos-fst3
    06a6b38d9d5b        gitlab-registry.cern.ch/cernbox/boxedhub/cernbox:v1.3          "/bin/bash /root/sta…"   16 hours ago        Up 16 hours                                                    cernbox
    0c665da3c97f        gitlab-registry.cern.ch/cernbox/boxedhub/eos-fuse:v0.8         "/bin/bash /root/sta…"   16 hours ago        Up 16 hours                                                    eos-fuse
    66f1b7bb6df3        gitlab-registry.cern.ch/cernbox/boxedhub/cernboxgateway:v1.1   "/bin/bash /root/sta…"   16 hours ago        Up 16 hours         0.0.0.0:80->80/tcp, 0.0.0.0:443->443/tcp   cernboxgateway
    1ea87840182a        gitlab-registry.cern.ch/cernbox/boxedhub/eos-storage:v0.9      "/bin/bash /root/sta…"   16 hours ago        Up 16 hours                                                    eos-fst2
    e9bd00c4bf5b        gitlab-registry.cern.ch/cernbox/boxedhub/cvmfs:v0.5            "/bin/bash /root/sta…"   16 hours ago        Up 16 hours                                                    cvmfs

I started ScienceBox as I assume one should, with:
SetupHost.sh

But in that step I see no SWAN:

Run via docker-compose...
Creating cernboxgateway ... done
Creating cvmfs          ... done
Creating eos-fst2       ... done
Creating eos-fst3       ... done
Creating eos-fst1       ... done
Creating eos-mgm        ... done
Creating eos-fuse       ... done
Creating ldap           ... done
Creating cernbox        ... done
Creating eos-mq         ... done
Creating eos-fst4       ... done
Creating cernboxmysql   ... done
Creating jupyterhub     ... done
Creating ldap-add       ... done

However, then I see:

Configuring:
  - Initialization
  - LDAP
  - EOS headnode
  - EOS storage servers
  - CERNBox
  - SWAN

Hi,

All the containers appear to be working; note that SWAN itself runs inside the jupyterhub container. Please follow the next steps to check the jupyterhub logs.
Let's go inside the container with

docker exec -it  4337e6e1f496 bash

I can see in the docker ps output above that the ID for jupyterhub is 4337e6e1f496.

and let’s see the log inside the container.

cat /var/log/jupyterhub/jupyterhub.log
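If it is easier to watch the log live while clicking the Try-it button, something like this (same container ID assumed) should also work from the host:

    docker exec -it 4337e6e1f496 tail -f /var/log/jupyterhub/jupyterhub.log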

Looks like this:

   [root@4337e6e1f496 /]# cat /var/log/jupyterhub/jupyterhub.log
[I 2019-11-20 18:42:52.216 SWAN app:1673] Using Authenticator: ldapauthenticator.ldapauthenticator.LDAPAuthenticator-1.2.2
[I 2019-11-20 18:42:52.216 SWAN app:1673] Using Spawner: swanspawner.swandockerspawner.SwanDockerSpawner
[D 2019-11-20 18:42:52.217 SWAN app:1625] Could not load pycurl: No module named 'pycurl'
    pycurl is recommended if you have a large number of users.
[D 2019-11-20 18:42:52.219 SWAN app:1050] Generating new cookie_secret
[I 2019-11-20 18:42:52.219 SWAN app:1055] Writing cookie_secret to /srv/jupyterhub/cookie_secret
[D 2019-11-20 18:42:52.220 SWAN app:1071] Connecting to db: sqlite:////srv/jupyterhub/jupyterhub.sqlite
[D 2019-11-20 18:42:52.236 SWAN orm:656] Stamping empty database with alembic revision 896818069c98
[I 2019-11-20 18:42:52.241 alembic.runtime.migration migration:130] Context impl SQLiteImpl.
[I 2019-11-20 18:42:52.241 alembic.runtime.migration migration:137] Will assume non-transactional DDL.
[I 2019-11-20 18:42:52.258 alembic.runtime.migration migration:360] Running stamp_revision  -> 896818069c98
[D 2019-11-20 18:42:52.259 alembic.runtime.migration migration:562] new branch insert 896818069c98
[I 2019-11-20 18:42:52.538 SWAN proxy:431] Generating new CONFIGPROXY_AUTH_TOKEN
[I 2019-11-20 18:42:52.570 SWAN app:1201] Not using whitelist. Any authenticated user will be allowed.
[D 2019-11-20 18:42:52.664 SWAN app:1473] Loading state for dummy_admin from db
[D 2019-11-20 18:42:52.668 SWAN app:1489] Loaded users:
    dummy_admin admin
[I 2019-11-20 18:42:52.678 SWAN app:1855] Hub API listening on http://jupyterhub:8080/hub/
[W 2019-11-20 18:42:52.680 SWAN proxy:565] Running JupyterHub without SSL.  I hope there is SSL termination happening somewhere else...
[I 2019-11-20 18:42:52.680 SWAN proxy:567] Starting proxy @ http://127.0.0.1:8000/
[D 2019-11-20 18:42:52.680 SWAN proxy:568] Proxy cmd: ['/srv/jupyterhub/jh_gitlab/scripts/start_proxy.sh', '--ip', '127.0.0.1', '--port', '8000', '--api-ip', '127.0.0.1', '--api-port', '8001', '--error-target', 'http://jupyterhub:8080/hub/error']
[D 2019-11-20 18:42:52.686 SWAN proxy:517] Writing proxy pid file: jupyterhub-proxy.pid
18:42:52.978 [ConfigProxy] info: Proxying http://*:8000 to (no default)
18:42:52.983 [ConfigProxy] info: Proxy API at http://127.0.0.1:8001/api/routes
[D 2019-11-20 18:42:53.068 SWAN proxy:603] Proxy started and appears to be up
[I 2019-11-20 18:42:53.068 SWAN app:1876] Starting managed service cull-idle
[I 2019-11-20 18:42:53.069 SWAN service:302] Starting service 'cull-idle': ['python3', '/srv/jupyterhub/jh_gitlab/scripts/cull_idle_servers.py', '--cull_every=600', '--timeout=14400', '--local_home=True', '--cull_users=True']
[I 2019-11-20 18:42:53.071 SWAN service:114] Spawning python3 /srv/jupyterhub/jh_gitlab/scripts/cull_idle_servers.py --cull_every=600 --timeout=14400 --local_home=True --cull_users=True
[D 2019-11-20 18:42:53.078 SWAN spawner:851] Polling subprocess every 30s
[D 2019-11-20 18:42:53.082 SWAN proxy:296] Fetching routes to check
[D 2019-11-20 18:42:53.091 SWAN proxy:686] Proxy: Fetching GET http://127.0.0.1:8001/api/routes
18:42:53.106 [ConfigProxy] info: 200 GET /api/routes 
[I 2019-11-20 18:42:53.107 SWAN proxy:301] Checking routes
[I 2019-11-20 18:42:53.107 SWAN proxy:370] Adding default route for Hub: / => http://jupyterhub:8080
[D 2019-11-20 18:42:53.108 SWAN proxy:686] Proxy: Fetching POST http://127.0.0.1:8001/api/routes/
18:42:53.114 [ConfigProxy] info: Adding route / -> http://jupyterhub:8080
18:42:53.115 [ConfigProxy] info: Route added / -> http://jupyterhub:8080
18:42:53.116 [ConfigProxy] info: 201 POST /api/routes/ 
[I 2019-11-20 18:42:53.116 SWAN app:1912] JupyterHub is now running at http://127.0.0.1:8000/
[W 191120 18:42:53 cull_idle_servers:368] Could not load pycurl: No module named 'pycurl'
    pycurl is recommended if you have a large number of users.
[I 2019-11-20 18:42:53.301 SWAN log:158] 200 GET /hub/api/users (cull-idle@172.24.0.13) 28.16ms
[D 2019-11-20 18:47:53.119 SWAN proxy:686] Proxy: Fetching GET http://127.0.0.1:8001/api/routes
18:47:53.131 [ConfigProxy] info: 200 GET /api/routes 
[I 2019-11-20 18:47:53.133 SWAN proxy:301] Checking routes
[D 2019-11-20 18:52:53.118 SWAN proxy:686] Proxy: Fetching GET http://127.0.0.1:8001/api/routes
18:52:53.126 [ConfigProxy] info: 200 GET /api/routes 
[I 2019-11-20 18:52:53.127 SWAN proxy:301] Checking routes
[I 2019-11-20 18:52:53.382 SWAN log:158] 200 GET /hub/api/users (cull-idle@172.24.0.13) 33.34ms
[D 2019-11-20 18:57:53.118 SWAN proxy:686] Proxy: Fetching GET http://127.0.0.1:8001/api/routes
18:57:53.128 [ConfigProxy] info: 200 GET /api/routes 
... and it continues like that.

Is the problem that it points to the wrong port? Try-it points to 8443; docker ps does show 0.0.0.0:8443->443/tcp on the jupyterhub container, but I see no mention of that port in the logs?
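One way to check (assuming standard tools on the host, and the FQDN from above) might be:

    # on the host: is anything listening on / published at 8443?
    sudo ss -tlnp | grep 8443
    # from outside: is the port reachable at all?
    curl -vk https://science-box.sciencebox.grid.uiocloud.no:8443/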

Hi @maikenp

I need to check with my colleagues whether that is an issue related to ports and firewalls;
as soon as I have the answer I will get back to you.

OK, so I opened up port 8443 on the OpenStack instance and that actually fixed the issue :slight_smile:
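For anyone finding this later: with the OpenStack CLI, opening the port in the instance's security group would look something like this (the group name "default" is just an example):

    openstack security group rule create --proto tcp --dst-port 8443 --remote-ip 0.0.0.0/0 default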

So maybe the instructions should include which ports must be opened (judging from the docker ps output above, at least 80, 443, and 8443). Or is it there somewhere and I missed it?

Great! @maikenp

Yes! Thank you for the feedback, I will include it in the requirements.
I can see that it is working now :slight_smile:

Hello @ozapatam ,

thanks to the help above we got to a working SWAN page. We are having trouble creating new projects or notebooks, though; I think this is due to some misconfiguration of the local EOS instance, which is unable to write.

If one tries to create a new SWAN project, one gets the following error message after entering the name of the project:

Error creating project: Internal Server Error

If instead one goes to the CERNBox tab and navigates to the SWAN_projects directory, it is possible to create new directories there. However, trying to create a notebook (Python, C++, etc.) gives the error message:

Unexpected error while saving file: SWAN_projects/Untitled.ipynb [Errno 28] No space left on device: '/eos/user/u/user0/SWAN_projects/.~Untitled.ipynb'

This looks to me like we need to do some more configuration steps on the local EOS instance.
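In case it helps the diagnosis, we can collect output from the EOS head node; assuming the eos CLI is available in the eos-mgm container, something like:

    docker exec -it eos-mgm eos fs ls       # list filesystems and their status
    docker exec -it eos-mgm eos space ls    # configured spaces and usage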

Thanks for any further help!

James & Maiken

I will need a hand here from the EOS and CERNBox experts, @dalvesde

Dear all,

happy New Year! We were wondering if there was any news on the above issue with creating new projects or notebooks on an out-of-CERN instance? I think we concluded it was something to do with the EOS configuration, but we didn't get beyond that.

Thanks for any further advice!

James & Maiken

Hello James, Maiken,

Sorry for the long wait on this issue.

To exclude the obvious: how much space is left on the root partition of the machine where ScienceBox is deployed? Now that you have downloaded all the images, please make sure that more than 20 GB is available. I would suggest having at least 50 GB for minimal testing.
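A quick way to check both the partition and what Docker itself is consuming (standard commands, nothing ScienceBox-specific):

    df -h /             # free space on the root partition
    docker system df    # space used by images, containers, and volumes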

That said, could you please docker exec into the eos-mgm container (docker exec -it eos-mgm bash) and report the output of the following command: eos recycle ?
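Equivalently, as a single command from the host (just combining the two steps above):

    docker exec -it eos-mgm eos recycle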

Thank you!
Enrico

Dear James, Maiken,

Have you been able to check what Enrico suggested? Is there anything else we can help you with?