
startup.sh issues with WebWolf - cannot connect to the WebGoat DB #1079


Description

@fravoss

I am running the goatandwolf:8.2.2 container image in my Kubernetes cluster and have noticed two issues that prevent WebWolf from starting up:

(1) There is a race condition in start.sh between WebWolf starting up in the second JVM and attempting to connect to the WebGoat database in the first JVM, versus WebGoat starting up in the first JVM and bringing the database online. The startup.sh in webgoat/goatandwolf:8.2.2 has a sleep 10 in between the two java calls, but my webgoat.log shows the DB coming online only after ~15 seconds. Could the timeout be longer (or configurable via an env-var), or should there be a check for the successful startup of the first JVM (grep for "Started StartWebGoat" in webgoat.log) before start.sh starts the WebWolf JVM?
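For illustration, a minimal sketch of what such a check could look like in start.sh. The jar names and log path below are placeholders rather than the exact paths used in the image; only the "Started StartWebGoat" marker comes from my webgoat.log:

```sh
# Start WebGoat in the first JVM, as the script already does
# (jar and log paths are placeholders, not the exact ones from the image).
java -jar /home/webgoat/webgoat.jar > /home/webgoat/webgoat.log 2>&1 &

# Instead of a fixed "sleep 10", wait until the log shows the application
# (and with it the database) is up, with an upper bound.
timeout=60
until grep -q "Started StartWebGoat" /home/webgoat/webgoat.log 2>/dev/null; do
  timeout=$((timeout - 1))
  if [ "$timeout" -le 0 ]; then
    echo "WebGoat did not come up in time" >&2
    exit 1
  fi
  sleep 1
done

# Only now start WebWolf in the second JVM.
java -jar /home/webgoat/webwolf.jar
```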

(2) Unlike in the two-image build of webgoat:8.1, where I could pass --spring.datasource.url in the pod startup config, I am unable to set the database connection string correctly. I found in the all-in-one code that the connection string is derived from the WEBGOAT_SERVER environment variable. However, that creates a conflict for a container: I had to set WEBGOAT_SERVER to the FQDN by which my cluster can be reached from the browser, whereas I do not want to expose the database port outside the container.
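To make the conflict concrete, this is roughly the docker-run equivalent of my pod config (the FQDN is a placeholder, and 8080/9090 are just the usual WebGoat/WebWolf ports):

```sh
# Placeholder FQDN: this is what browsers need in order to reach WebGoat/WebWolf.
docker run -p 8080:8080 -p 9090:9090 \
  -e WEBGOAT_SERVER=webgoat.example.com \
  webgoat/goatandwolf:8.2.2

# Because the connection string is derived from WEBGOAT_SERVER, WebWolf then
# tries to reach the database at webgoat.example.com rather than at
# localhost inside the container, and that port is not exposed externally.
```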

I hacked my way through the startup by exec'ing into the pod after the WebGoat JVM had started up and the WebWolf JVM had terminated because of the race condition. I then set WEBGOAT_SERVER to localhost (that's where my DB runs in the pod) and manually ran the java startup call for WebWolf. I am considering putting this logic into a custom start.sh, but could there be a separate env-var for the WebGoat DB host to avoid such hacks?
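For reference, the manual workaround boils down to the following (pod name, jar path, and port are placeholders rather than the exact values from the image):

```sh
# Open a shell in the running pod (pod name is a placeholder).
kubectl exec -it webgoat-pod -- /bin/sh

# Inside the container: point the connection string at the WebGoat JVM
# running in the same pod...
export WEBGOAT_SERVER=localhost

# ...then start the WebWolf JVM by hand, mirroring the java call that
# start.sh normally makes (jar path and port are placeholders).
java -jar /home/webgoat/webwolf.jar --server.port=9090
```

A dedicated env-var for the DB host, falling back to WEBGOAT_SERVER when unset, would make this manual step unnecessary.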
