Is there any way to deploy via docker-compose? #329

@ndarilek

Found Concourse via the recent Hacker News thread, so I went straight to the binary release referenced there. I don't have a bunch of spare VM capacity, but what I do have is all managed via Docker, so I'd really rather not adopt another deployment system, cool though it may be.

I'm trying to create a Docker/Compose-based setup. Thus far I have this in my Dockerfile:

FROM debian:latest

# Fetch the standalone Concourse binary and make it executable
ADD https://github.com/concourse/concourse/releases/download/v0.76.0/concourse_linux_amd64 /usr/local/bin/concourse
RUN chmod +x /usr/local/bin/concourse

ENTRYPOINT ["/usr/local/bin/concourse"]

And this in docker-compose.yml:

postgres:
  image: postgres:latest
  environment:
    POSTGRES_PASSWORD: password

tsa:
  build: .
  links:
    - postgres
  ports:
    - 8080:8080
  volumes:
    - .:/var/lib/concourse
  command: >
    web
    --tsa-host-key /var/lib/concourse/host_key
    --tsa-authorized-keys /var/lib/concourse/authorized_worker_keys
    --session-signing-key /var/lib/concourse/session_signing_key
    --basic-auth-username user
    --basic-auth-password password
    --postgres-data-source postgres://postgres:password@postgres/postgres?sslmode=disable
    --external-url http://localhost:8080

worker:
  build: .
  links:
    - tsa
  volumes:
    - .:/var/lib/concourse
    - /var/run/docker.sock:/var/run/docker.sock
  privileged: true
  command: >
    worker
    --tsa-public-key /var/lib/concourse/host_key.pub
    --tsa-worker-private-key /var/lib/concourse/worker_key
    --tsa-host tsa
    --work-dir /srv
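
For completeness, the key files referenced above aren't anything special; I generated them on the host with something like this (exact invocations from memory):

ssh-keygen -t rsa -f host_key -N ''
ssh-keygen -t rsa -f worker_key -N ''
ssh-keygen -t rsa -f session_signing_key -N ''
cp worker_key.pub authorized_worker_keys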

My hope was to give the worker container full privileges and bind-mount the host's Docker socket into it. I currently do something similar for Jenkins, though that setup is much more complex than my needs dictate, since almost everything I build is Docker-based.

Unfortunately, I'm hitting the following error when running docker-compose up. I don't understand the errors about failing to get the various graph drivers; docker info claims my host is using the btrfs driver, and since I'm bind-mounting the host's socket in, it seems like the worker should just defer to the host. It would be nice if I could spin up builder VMs/servers with lots of CPU/RAM and simply delegate Concourse's Docker capability detection to those hosts:

Starting concourse_postgres_1
Starting concourse_tsa_1
Starting concourse_worker_1
Attaching to concourse_postgres_1, concourse_tsa_1, concourse_worker_1
postgres_1 | LOG:  database system was shut down at 2016-03-27 01:59:03 UTC
postgres_1 | LOG:  MultiXact member wraparound protections are now enabled
postgres_1 | LOG:  database system is ready to accept connections
postgres_1 | LOG:  autovacuum launcher started
tsa_1      | {"timestamp":"1459044271.055514812","source":"atc","message":"atc.db.migrations.migration-lock-acquired","log_level":1,"data":{"session":"1"}}
tsa_1      | {"timestamp":"1459044271.416863203","source":"tsa","message":"tsa.listening","log_level":1,"data":{}}
tsa_1      | {"timestamp":"1459044271.417348623","source":"atc","message":"atc.listening","log_level":1,"data":{"debug":"127.0.0.1:8079","web":"0.0.0.0:8080"}}
worker_1   | {"timestamp":"1459044286.023603678","source":"garden-linux","message":"garden-linux.failed-to-parse-pool-state","log_level":2,"data":{"error":"openning state file: open /srv/linux/state/port_pool.json: no such file or directory"}}
worker_1   | {"timestamp":"1459044286.024022102","source":"garden-linux","message":"garden-linux.unsupported-graph-driver","log_level":1,"data":{"name":"vfs"}}
worker_1   | time="2016-03-27T02:04:46Z" level=error msg="Failed to GetDriver graph btrfs /srv/linux/graph"
worker_1   | time="2016-03-27T02:04:46Z" level=error msg="Failed to GetDriver graph zfs /srv/linux/graph"
worker_1   | time="2016-03-27T02:04:46Z" level=error msg="Failed to GetDriver graph devicemapper /srv/linux/graph"
worker_1   | time="2016-03-27T02:04:46Z" level=error msg="'overlay' not found as a supported filesystem on this host. Please ensure kernel is new enough and has overlay support loaded."
worker_1   | {"timestamp":"1459044286.024624586","source":"garden-linux","message":"garden-linux.retain.starting","log_level":1,"data":{"session":"10"}}
worker_1   | {"timestamp":"1459044286.024943590","source":"garden-linux","message":"garden-linux.retain.retained","log_level":1,"data":{"session":"10"}}
worker_1   | {"timestamp":"1459044287.595874548","source":"baggageclaim","message":"baggageclaim.listening","log_level":1,"data":{"addr":"0.0.0.0:7788"}}
worker_1   | {"timestamp":"1459044287.864905119","source":"garden-linux","message":"garden-linux.failed-to-set-up-backend","log_level":3,"data":{"error":"exit status 127","trace":"goroutine 1 [running, locked to thread]:\ngithub.com/pivotal-golang/lager.(*logger).Fatal(0xc82000ec60, 0xc1d160, 0x18, 0x7f7ee3934328, 0xc82021a780, 0x0, 0x0, 0x0)\n\t/tmp/build/9674af12/garden-linux-release/src/github.com/pivotal-golang/lager/logger.go:131 +0xc5\nmain.main()\n\t/tmp/build/9674af12/garden-linux-release/src/github.com/cloudfoundry-incubator/garden-linux/main.go:507 +0x3b14\n"}}
worker_1   | panic: exit status 127
worker_1   |
worker_1   | goroutine 1 [running, locked to thread]:
worker_1   | panic(0xb57560, 0xc82021a780)
worker_1   |   /usr/local/go/src/runtime/panic.go:464 +0x3e6
worker_1   | github.com/pivotal-golang/lager.(*logger).Fatal(0xc82000ec60, 0xc1d160, 0x18, 0x7f7ee3934328, 0xc82021a780, 0x0, 0x0, 0x0)
worker_1   |   /tmp/build/9674af12/garden-linux-release/src/github.com/pivotal-golang/lager/logger.go:152 +0x698
worker_1   | main.main()
worker_1   |   /tmp/build/9674af12/garden-linux-release/src/github.com/cloudfoundry-incubator/garden-linux/main.go:507 +0x3b14
tsa_1      | {"timestamp":"1459044287.916075468","source":"tsa","message":"tsa.connection.keepalive","log_level":1,"data":{"session":"1","type":"keepalive"}}
tsa_1      | {"timestamp":"1459044287.916302443","source":"tsa","message":"tsa.connection.channel-request","log_level":1,"data":{"session":"1","type":"exec"}}
tsa_1      | {"timestamp":"1459044287.916809559","source":"tsa","message":"tsa.connection.tcpip-forward.forwarding-tcpip","log_level":1,"data":{"requested-bind-addr":"0.0.0.0:7777","session":"1.2"}}
tsa_1      | {"timestamp":"1459044287.916914940","source":"tsa","message":"tsa.connection.forward-worker.forwarded-tcpip","log_level":1,"data":{"bound-port":37455,"session":"1.1"}}
tsa_1      | {"timestamp":"1459044287.917042255","source":"tsa","message":"tsa.connection.tcpip-forward.forwarding-tcpip","log_level":1,"data":{"requested-bind-addr":"0.0.0.0:7788","session":"1.3"}}
tsa_1      | {"timestamp":"1459044287.917137384","source":"tsa","message":"tsa.connection.forward-worker.forwarded-tcpip","log_level":1,"data":{"bound-port":38986,"session":"1.1"}}
tsa_1      | {"timestamp":"1459044287.917503834","source":"tsa","message":"tsa.connection.forward-worker.register.start","log_level":1,"data":{"session":"1.1.4","worker-address":"127.0.0.1:37455","worker-platform":"linux","worker-tags":""}}
worker_1   | {"timestamp":"1459044287.917503834","source":"tsa","message":"tsa.connection.forward-worker.register.start","log_level":1,"data":{"session":"1.1.4","worker-address":"127.0.0.1:37455","worker-platform":"linux","worker-tags":""}}
tsa_1      | {"timestamp":"1459044287.918128490","source":"tsa","message":"tsa.connection.tcpip-forward.failed-to-open-channel","log_level":2,"data":{"error":"ssh: unexpected packet in response to channel open: \u003cnil\u003e","session":"1.2"}}
tsa_1      | {"timestamp":"1459044287.918167353","source":"tsa","message":"tsa.connection.forward-worker.connection-closed","log_level":2,"data":{"error":"EOF","session":"1.1"}}
worker_1   | Exit trace for group:
worker_1   | garden exited with error: exit status 2
worker_1   | baggageclaim exited with nil
worker_1   | beacon exited with nil
worker_1   |
tsa_1      | {"timestamp":"1459044287.918196917","source":"tsa","message":"tsa.connection.cleanup.interrupting","log_level":0,"data":{"session":"1.4"}}
tsa_1      | {"timestamp":"1459044287.918217897","source":"tsa","message":"tsa.connection.cleanup.interrupting","log_level":0,"data":{"session":"1.4"}}
tsa_1      | {"timestamp":"1459044287.918234348","source":"tsa","message":"tsa.connection.forward-worker.register.failed-to-fetch-containers","log_level":2,"data":{"error":"Get http://api/containers: read tcp 127.0.0.1:43456-\u003e127.0.0.1:37455: read: connection reset by peer","session":"1.1.4"}}
tsa_1      | {"timestamp":"1459044287.918259859","source":"tsa","message":"tsa.connection.forward-worker.register.done","log_level":1,"data":{"session":"1.1.4","worker-address":"127.0.0.1:37455","worker-platform":"linux","worker-tags":""}}
tsa_1      | {"timestamp":"1459044287.918238640","source":"tsa","message":"tsa.connection.cleanup.interrupting","log_level":0,"data":{"session":"1.4"}}
tsa_1      | {"timestamp":"1459044287.918277979","source":"tsa","message":"tsa.connection.tcpip-forward.failed-to-accept","log_level":2,"data":{"error":"accept tcp [::]:38986: use of closed network connection","session":"1.3"}}
tsa_1      | {"timestamp":"1459044287.918318748","source":"tsa","message":"tsa.connection.tcpip-forward.failed-to-accept","log_level":2,"data":{"error":"accept tcp [::]:37455: use of closed network connection","session":"1.2"}}
tsa_1      | {"timestamp":"1459044287.918364763","source":"tsa","message":"tsa.connection.cleanup.process-exited-successfully","log_level":0,"data":{"session":"1.4"}}
tsa_1      | {"timestamp":"1459044287.918384314","source":"tsa","message":"tsa.connection.cleanup.process-exited-successfully","log_level":0,"data":{"session":"1.4"}}
tsa_1      | {"timestamp":"1459044287.918402195","source":"tsa","message":"tsa.connection.cleanup.process-exited-successfully","log_level":0,"data":{"session":"1.4"}}
concourse_worker_1 exited with code 1
tsa_1      | {"timestamp":"1459044301.417274714","source":"atc","message":"atc.lost-and-found.lease-invalidate-cache.tick","log_level":1,"data":{"session":"6.1"}}
tsa_1      | {"timestamp":"1459044301.426344633","source":"atc","message":"atc.lost-and-found.lease-invalidate-cache.collecting-baggage","log_level":1,"data":{"session":"6.1"}}
tsa_1      | {"timestamp":"1459044301.426389933","source":"atc","message":"atc.baggage-collector.collect","log_level":1,"data":{"session":"7"}}
tsa_1      | {"timestamp":"1459044331.417405128","source":"atc","message":"atc.lost-and-found.lease-invalidate-cache.tick","log_level":1,"data":{"session":"6.2"}}
tsa_1      | {"timestamp":"1459044331.431194782","source":"atc","message":"atc.lost-and-found.lease-invalidate-cache.collecting-baggage","log_level":1,"data":{"session":"6.2"}}
tsa_1      | {"timestamp":"1459044331.431253195","source":"atc","message":"atc.baggage-collector.collect","log_level":1,"data":{"session":"7"}}
tsa_1      | {"timestamp":"1459044361.417612076","source":"atc","message":"atc.lost-and-found.lease-invalidate-cache.tick","log_level":1,"data":{"session":"6.3"}}
Stopping concourse_tsa_1 ... done
Stopping concourse_postgres_1 ... done
Gracefully stopping... (press Ctrl+C again to force)
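
For reference, this (or something very close to it) is how I checked the host's storage driver, which is the basis of my btrfs claim above:

$ docker info | grep -i 'storage driver'
Storage Driver: btrfs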

Is there just no way to run this within Docker, even with --privileged and the host's Docker socket bind-mounted in? I suppose I could give Concourse its own VM if absolutely necessary, but I like the idea of just scaling worker nodes up as containers via my existing Rancher-based setup, which makes that incredibly easy.
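
If the worker did come up cleanly, adding capacity should (I'd hope) be as simple as something like:

docker-compose scale worker=3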

Thanks.
