You cannot mount remote resources that live on the Internet. What you can do is have a shell script in your Docker image which, when executed, downloads those resources, and make Docker run that script when the container is run. Dockerfile:

FROM ubuntu:latest
COPY download-from-github.sh /
#...
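A minimal sketch of such a script, with a hypothetical URL and target path (neither is from the original answer):

#!/bin/sh
# download-from-github.sh: fetch the remote resources when the container starts
set -e
curl -L -o /srv/resources.tar.gz https://github.com/example/repo/archive/master.tar.gz
tar -xzf /srv/resources.tar.gz -C /srv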
I would skip the Vagrant step; Docker containers are enough in most cases. Regarding point 2: of course you can create separate containers for each instance and app (e.g. one for the server app, one for the database and one for some queue). You manage dependencies between containers using the link command (read here); see the sketch below. To...
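A minimal sketch of that linking (container and image names are placeholders):

# start the database container
docker run -d --name appdb postgres
# the database is then reachable from the app container under the alias "db"
docker run -d --name app --link appdb:db myorg/server-app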
docker,environment-variables,dockerfile,dokku
I've never used Dokku, but you will only be able to use those variables at run-time in a launched container. Docker intentionally doesn't allow you to pass in variables to docker build, so that builds are reproducible across environments. I suspect you can just write a script that configures the...
Information: Because Docker only runs on Linux, you will need to install some kind of virtual instance on your local machine. An easy and popular way to do that is to install Boot2Docker and VirtualBox (VirtualBox is a dependency of Boot2Docker). You can download, set up and install the latest versions...
docker,google-cloud-platform,gcloud
From the release notes 0.9.65 (2015/06/17): gcloud preview docker moved to gcloud docker. The command was moved out of preview in the latest version of gcloud. You can run it with:

gcloud docker push gcr.io/<my-project-xxx>/<my-image-yyy>

...
When using the postgres binary to submit SQL commands (as opposed to psql), the end-of-line is the end of the command and the semi-colon has no meaning. Per documentation: Normally, the single-user mode server treats newline as the command entry terminator; there is no intelligence about semicolons, as there is...
php,docker,phpstorm,xdebug,boot2docker
Your Docker container can't see your PHP Storm IDE at the IP 127.0.0.1; from within a container, the host is typically 172.17.42.1. Also, remote_connect_back probably won't work well. Try setting it up like this:

xdebug.remote_host=172.17.42.1
xdebug.remote_connect_back=Off

You might need to look for a proper way to know the host's IP...
docker,google-cloud-platform,google-container-engine
Try replacing hostDir with hostPath as mentioned in v1beta3-conversion-tips-from-v1beta12. Try replacing

volumes:
  - name: string
    source:
      # Either emptyDir for an empty directory
      # emptyDir: {}
      # Or hostDir for a pre-existing directory on the host
      hostDir:
        path: /home

with

volumes:
  - name: string
    hostPath:
      path: /home

at the bottom...
I have found the solution. Docker offers two networking modes: bridge and host. The bridge networking mode is the default, and when using this mode, MySQL must be listening for connections on the Docker host IP address (172.17.42.1). In other words, the bind-address in the MySQL config file must be set...
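For illustration, a minimal sketch of the relevant my.cnf stanza; listening on 0.0.0.0 (all interfaces) is one common choice and is shown here as an assumption, not the answer's exact value:

[mysqld]
bind-address = 0.0.0.0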
The first rule of Docker containers is don't locate your data inside your application container. Data that needs to persist beyond the lifetime of the container should be stored in a Docker "volume", either mounted from a host directory or from a data-only container. If you want to be able...
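As a hedged illustration of the two options mentioned (image names and paths are placeholders):

# option 1: mount a host directory as the data location
docker run -d -v /srv/appdata:/var/lib/app myorg/app
# option 2: a data-only container whose volume other containers reuse
docker create -v /var/lib/app --name appdata busybox
docker run -d --volumes-from appdata myorg/app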
Every RUN line in a Dockerfile is essentially equivalent to doing a docker run against the image produced by the previous line. By default Docker keeps all these intermediate images to help with caching, which speeds up subsequent builds. You can ask for them to be removed during the build by specifying the --rm or --force-rm flag.
linux,osx,networking,docker,boot2docker
I got this working by using the --net host flag, which essentially says that the container is going to use the host's networking; in this case ip addr displays the host machine's networking details for eth0 and eth1.

docker run -it --net host <image name> /bin/bash

...
elasticsearch,docker,kibana,kibana-4
Can I do that with this image itself? Yes, just use Docker volumes to pass in your own config files. Let's say you have the following files on your Docker host:

/home/liv2hak/elasticsearch.yml
/home/liv2hak/kibana.yml

You can then start your container with:

docker run -d --name kibana -p 5601:5601 -p 9200:9200...
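A plausible shape for those volume flags, purely as a hedged sketch; the container-side paths are assumptions about where the image reads its configs, and <image> stands in for the actual image name:

docker run -d --name kibana -p 5601:5601 -p 9200:9200 \
  -v /home/liv2hak/elasticsearch.yml:/opt/elasticsearch/config/elasticsearch.yml \
  -v /home/liv2hak/kibana.yml:/opt/kibana/config/kibana.yml \
  <image>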
The best way would be to use VOLUME and mount your source code into the container. That way, your IDE will have access to the code on the host and all the changes you make will also be reflected in the container.
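For example, a minimal sketch (host and container paths are placeholders):

docker run -it -v /home/me/myproject:/src myimage

Edits made on the host under /home/me/myproject show up immediately in the container under /src.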
No, you cannot do this. The DEAs (the part of Cloud Foundry that runs your app) are firewalled off from talking to the internal IPs of other DEAs. In Bluemix we do not charge for network bandwidth.
As per the Docker Hub documentation: "Docker itself provides access to Docker Hub services via the docker search, pull, login, and push commands." It does not look like you can do a docker inspect without pulling the image...
python,python-3.x,amazon-web-services,docker
You should install python3-pip in your Dockerfile and then run pip3 install -r requirements.txt
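A minimal Dockerfile sketch (the base image and file layout are assumptions, not from the original answer):

FROM ubuntu:14.04
RUN apt-get update && apt-get install -y python3-pip
COPY requirements.txt /app/requirements.txt
RUN pip3 install -r /app/requirements.txt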
python,docker,pycharm,remote-debugging,boot2docker
Okay so to answer your question(s): In PyCharm, I can set a remote python interpreter via SSH but I'm not sure how to do it for dockers that can only be accessed via Boot2Docker? You need: To ensure that you have SSH running in your container There are many base...
The --iptables option only applies to the Docker daemon; it's not a per-container option. The corollary is that this isn't something you could ever set from your docker-compose.yaml file. You would need to modify the options passed to the Docker daemon; on Red Hat systems and derivatives this means you...
You seem to have named the root of your LDAP DIT as dc=myorga. So an entry that requires ou=users,DC=example.com isn't going to work. You'll have to change that accordingly.
node.js,amazon-web-services,docker
I think the answers regarding the environment variables are good solutions. To offer an alternative, if you use a file for AWS authentication, you could use Docker volumes to mount it. Mount a Host Directory as a Data Volume: in addition to creating a volume using the -v flag...
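A hedged sketch of that approach, assuming the default AWS CLI credential location on the host (image name is a placeholder):

docker run -d -v $HOME/.aws:/root/.aws:ro myorg/app

The :ro suffix mounts the credentials read-only inside the container.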
google-app-engine,docker,boot2docker
You can SSH into the container, make changes, and then commit the container, which will save your changes to an image. You can then push this image to a registry. Unless you are using a data container, all your changes are saved.
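A minimal sketch of that flow (container, repository and tag names are placeholders):

# persist the container's filesystem changes as a new image
docker commit mycontainer myrepo/myimage:latest
# publish it
docker push myrepo/myimage:latest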
Your iptables configuration looks a little broken right now, as if you cleared it out at some point without restarting Docker. For example, you have a DOCKER chain available in both the filter and nat tables, but no rules that reference it, so rules placed in that chain will have...
Or: docker stats $(docker ps | awk '{if(NR>1) print $NF}'). The awk expression skips the header row (NR>1) and prints the last column of each line, which is the container name.
So, it wasn't a problem with either Docker or Elastic. Just to recap, the same script throwing PUT requests at an Elasticsearch setup locally worked, but when throwing them at a container with Elasticsearch it failed after a few thousand documents (20k). Note that the overall number of documents was roughly...
docker,boot2docker,docker-machine
After some research and exchanges with other users in the freenode chat room, I found that in the current version this option is not available: you can't set an env var to disable TLS verification during the creation process with docker-machine. For now, some people recommend the solution presented in deis/issues/2230.
I don't know the exact reason why uname -p fails with the java:7 Docker image, but it seems to be due to the Docker Debian image. With the Ubuntu Docker image, everything is fine.

$ docker run debian uname -p
unknown
$ docker run ubuntu uname -p
x86_64

If you...
Sure. Docker is just responding to the error codes returned by the RUN shell commands in the Dockerfile. If your Dockerfile has something like:

RUN make

you could replace that with:

RUN make; exit 0

This will always return a 0 (success) exit code. The disadvantage here is that your...
I'm not sure about the first problem; it may be that Postgres doesn't like running on top of the UFS. The second problem is just that a container will exit when its main process ends. So the command "service postgres start" runs, starts Postgres in the background, then immediately exits...
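A common fix, as a hedged sketch, is to make Postgres the container's foreground process; the binary and data paths below are assumptions and vary by distro/version:

# run Postgres in the foreground as the container's main process
CMD ["/usr/lib/postgresql/9.4/bin/postgres", "-D", "/var/lib/postgresql/data"]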
There was a PR which added this to the docs: "Note: This command (attach) is not for running a new process in a container. See: docker exec." The answer to "Docker. How to get bash\ssh inside runned container (run -d)?" illustrates the difference: (docker >= 1.3) If we use docker...
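For reference, a minimal docker exec invocation (the container name is a placeholder):

docker exec -it mycontainer /bin/bash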
mysql,linux,database,docker,virtualization
The official mysql image stores its data in a volume. Normally this is desired, so that your data can persist beyond the life span of your container, but data volumes bypass the union file system and are not committed to the image. You can accomplish what you're trying to do by...
The postgres:9.4 image you've inherited from declares a volume at /var/lib/postgresql/data. This essentially means you can't copy any files to that path in your image; the changes will be discarded. You have a few choices: you could just add your own configuration file as a volume at run-time with...
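If you go the run-time route, a hedged sketch; the container-side path is illustrative, and where Postgres actually reads its config depends on how you start it:

docker run -d -v $PWD/postgresql.conf:/etc/postgresql/postgresql.conf postgres:9.4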
Adrian Mouat's recommendation was the answer to my question. Docker started using btrfs after I started it like this:

/usr/bin/docker -d -g /var/lib/docker2

Thank you very much....
docker,elastic-beanstalk,aws-cli
I've not given this a shot, but I assume there's a --solution-stack-name option where you can pass values such as 64bit Amazon Linux 2015.03 v1.4.1 running Docker 1.6.0. You can alternatively specify the solution stack in a JSON file and pass that file via --option-settings file://your_options.json, where you can include...
When you run the wordpress container for the first time, the initialization script downloads the WordPress codebase to /var/www/html and then starts the web server. Since everything inside a container is ephemeral, the codebase, along with any changes you make, will be lost when you re-run the container (unless you just stop/start...
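One way to keep the codebase across container re-runs, as a hedged sketch (the host path is a placeholder):

docker run -d --name wp -p 8080:80 -v /srv/wordpress:/var/www/html wordpress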
elasticsearch,docker,mesos,marathon
Elasticsearch and NFS are not the best of pals ;-). You don't want to run your cluster on NFS; it's much too slow, and Elasticsearch works better when the storage is faster. If you introduce the network into this equation you'll get into trouble. I have no...
docker,google-compute-engine,kubernetes,aerospike
An alternative to specifying all mesh seed IP addresses is to use the asinfo tip command. Please see: http://www.aerospike.com/docs/reference/info/#tip

asinfo -v 'tip:host=172.16.121.138;port=3002'

The above command could be added to a script or orchestration tool with the correct IPs. You may also find additional info on the Aerospike forum:...
docker,google-compute-engine,kubernetes
I think you laid out the issues pretty well. The two kinds of scaling you described are called "vertical scaling" (increasing memory or CPU of a single instance) and "horizontal scaling" (increasing number of instances). On availability: As you observed, you can achieve pretty good availability even with a single...
docker,large-data-volumes,docker-swarm
Can data volume containers be used/mounted in containers residing on other hosts within a Docker swarm? Docker, by itself, does not provide any facility for either migrating data or sharing data between hosts in a cluster. How is the performance? Is it recommended to structure things this way? Docker...
I figured it must be a firewall problem, but it took a while longer before I realised it was the firewall built into Linux. To make 2375 accessible, use one of the following, depending on your distro:

sudo firewall-cmd --zone=public --add-port=2375/tcp --permanent
sudo firewall-cmd --reload

OR

sudo iptables -I INPUT...
Run the full docker version command, and you should see something like this:

$ docker version
Client version: 1.6.2
Client API version: 1.18
Go version (client): go1.4.2
Git commit (client): 7c8fca2
OS/Arch (client): darwin/amd64
Server version: 1.6.2
Server API version: 1.18
Go version (server): go1.4.2
Git commit (server): 7c8fca2
OS/Arch...
You need to append daemon off; to your nginx.conf, instructing nginx to run in the foreground. Then modify your supervisor stanza to be:

[program:nginx]
command=nginx
autorestart=true

It will still spawn master/worker processes/subprocesses and can be used this way in production setups just fine. In this case it's supervisor that...
Currently Docker only supports the amd64 architecture. According to this post, the Docker developers will eventually provide support for other architectures: https://github.com/docker/docker/issues/136....
You need to run docker login before you can access private repositories.
I found a solution that lets yaourt only download the information on how to build the package, then invoke makepkg itself, both as a non-root user, and afterwards install the built package as root with pacman. The portion of the Dockerfile looks like this:

RUN mkdir -p /tmp/Package/ &&...
Boot2Docker is a small Linux VM running on VirtualBox. So before you can use your files (from Windows) in Docker (which is running in this VM), you must first share your code with the Boot2Docker VM itself. To do so, you mount your Windows folder into the VM:

C:/Program Files/Oracle/VirtualBox/VBoxManage...
docker,dockerfile,docker-compose
Use the same image you use for running the application to make your data container. For example, if you want to make a PostgreSQL database, use the postgres image for your data container:

$ docker run --name dc postgres echo "Data Container"

This command created the data container and exited...
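Other containers can then attach that container's volumes; a brief sketch using the names above:

docker run -d --name db --volumes-from dc postgres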
1) ENTRYPOINT and CMD are executed in the order they appear in the Dockerfile, regardless of the volume mounts.
2) If you have an ENTRYPOINT launching a verb, you can pass a parameter.
3) Yes for docker run, but some examples might clarify this, and docker exec just gets you...
Found where the Kubernetes docs mention updates: https://github.com/GoogleCloudPlatform/kubernetes/blob/master/docs/replication-controller.md#rolling-updates. Wish it were more automated, but it works. EDIT 1: For automating this, I've found https://www.npmjs.com/package/node-kubernetes-client, which, since I already automate the rest of my build and deployment with a node process, will work really well....
git,docker,boot2docker,git-clone
The color you see in red is not caused by an error. It's because git clone doesn't use STDOUT; we can verify it with the following command:

$ git clone --progress --verbose https://github.com/zsh-users/antigen > /dev/null
Cloning into 'antigen'...
POST git-upload-pack (255 bytes)
remote: Counting objects: 1291, done.
remote: Total 1291 (delta 0),...
ruby-on-rails,docker,boot2docker
Since you are using Kitematic, the socket file hasn't been created yet. You need to create it manually. You can use the command:

eval "$(docker-machine env dev)"

and then run your ruby application. Refer: https://github.com/swipely/docker-api https://github.com/kitematic/kitematic/issues/517...
Data in a Docker volume (such as /var/jenkins_home) is not preserved as part of the docker commit operation. This is intentional: the idea is that you are persisting your data via some other mechanism, such as a host volume (-v /host/directory:/var/jenkins_home) or through the use of a data container...
To accomplish this you'd want to mount the Docker socket from the host machine into the container you'd like to send signals from. See, for instance, this post for explanation and details.
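A minimal sketch of that mount (image and container names are placeholders):

docker run -d -v /var/run/docker.sock:/var/run/docker.sock myorg/controller
# a docker client inside the container now talks to the host daemon, e.g.:
docker kill -s HUP target-container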
docker,dockerfile,docker-compose
Sorry for such a stupid question. I had forgotten to include the VOLUME instruction in the base image's Dockerfile, so the other containers couldn't use this mount point...
If you are using the official Wordpress image, you can use the same MySQL container. Just provide a different database for each Wordpress container. You can define the database for a Wordpress container with environment variables, for example:

docker run --name wordpress1 --link some-mysql:mysql -p 8080:80 -e WORDPRESS_DB_NAME=wordpress1 -d wordpress
docker run --name wordpress2...
It's working fine on my machine:

$ docker run --rm -v "$PWD:/src" -p 4000:4000 grahamc/jekyll serve -H 0.0.0.0
Configuration file: /src/_config.yml
            Source: /src
       Destination: /src/_site
      Generating...
Build Warning: Layout 'nil' requested in atom.xml does not exist.
Build Warning: Layout 'none' requested in feed.xml does not exist.
done.
Auto-regeneration: enabled for...
docker,exit,exit-code,docker-compose,fig
My workaround for now is to keep the pre-loading container running by adding tail -f /dev/null to the end of the entrypoint script. This keeps the process running while nothing actually happens.
docker,google-cloud-platform,google-container-engine,google-container-registry
The latest container vm image doesn't support the v1beta2 kubernetes API. You will need to update your manifest to use v1beta3 or v1 (with the corresponding yaml changes). The latest version of the container vm documentation shows a yaml example using the v1 API. ...
Looks like you need to run boot2docker ip and use that
save_docker doesn't seem to be used in docker, boot2docker, compose or machine. But even before trying to remove it, check if you have:

stopped containers that you could remove:
docker rm -v $(docker ps -a -q -f status=exited)

dangling images:
docker rmi $(docker images --filter "dangling=true" -q --no-trunc)

For that,...
linux,nginx,configuration,docker,sendfile
Just inject your own config file as a volume. Let's say you have a conf file in /tmp; then you can run the container with:

docker run -d -P -v /tmp/my.conf:/etc/nginx/nginx.conf -v /Users/user/site:/usr/share/nginx/html --name mysite nginx

...
The Go compiler doesn't replace or rewrite anything; the code is just wrong. The github.com/rcrowley/go-metrics/influxdb package was written against some other influxdb client code that no longer exists. (Looks like there are a couple of GitHub issues open about this already.) If you look at the current influxdb/client package, you'll see...
python,docker,pdb,docker-compose
Try running your web container with the --service-ports option: docker-compose run --service-ports web
All the containers in a pod are bound to the same network namespace. This means that (a) they all have the same ip address and (b) that localhost is the same across all the containers. In other words, if you have Apache running in one container in a pod and...
Replace your boot2docker start with boot2docker start && $(boot2docker shellinit) and you are good to go. $(boot2docker shellinit) will export all the Environment variables needed.
The build is failing because install.sh returns a non-zero code; when you execute the script manually you are ignoring the return code, but Docker fails the build. Usually a non-zero return code indicates an error, but if everything is OK in this case, you can ignore it:

RUN wget https://github.com/robbyrussell/oh-my-zsh/raw/master/tools/install.sh -O - | zsh || true
The docker build executes in a container separate from your host. The container has its own filesystem and is unaware of files on the host system. You may copy files into the image by using COPY /path/on/host /path/on/container in the Dockerfile. Currently pip needs the files, but they have not been copied into the build container.
you say "Alternatively, is it possible to pass sensitive parameters by reference to a file?" , extract from the doc http://docs.docker.com/reference/commandline/cli/#run --env-file=[] Read in a file of environment variables
You need to secure the registry before you can access it remotely, or explicitly allow all your Docker daemons to access insecure registries. To secure the registry, the easiest choice is to buy an SSL certificate for your server, but you can also self-sign the certificate and distribute it to clients....
elasticsearch,docker,dockerfile,kibana-4
The parent Dockerfile devdb/kibana uses a script to start kibana and elasticsearch when the Docker container is started; see CMD ["/sbin/my_init"] and the script itself. When you use the CMD instruction in your own Dockerfile, you override the one from the parent Dockerfiles. Since your CMD only starts gunicorn,...
I do both. My Dockerfile checks out the code for the application from Git. During development I mount a volume over the top of this folder with the version of the code I'm working on. When I'm ready to deploy to production, I just check into Git and re-build the...
Yes. By linking containers and exposing and publishing ports, containers can communicate with each other or with the host machine.
You're welcome! I initially forgot to post it here too. We all need recognition when we can get it, thanks. I found this quote somewhere; it sounds like a starting point to check... In your EC2 control panel, look at your instance and note the Security Group that is assigned to...
If you expose port 80 on all your containers with web applications, you can link them to your Nginx container. Here is a small sample docker-compose.yml:

app:
  build: app/
  expose:
    - "80"
api:
  build: api/
  expose:
    - "80"
webserver:
  image: nginx
  ports:
    - "80:80"
  volumes:
    - default.conf:/etc/nginx/conf.d/default.conf
  links:
    - app
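For completeness, a hedged sketch of what default.conf might contain; the upstream hostnames assume the webserver is linked to both app and api, and everything else is illustrative:

server {
    listen 80;
    # requests under /api/ go to the api container
    location /api/ {
        proxy_pass http://api:80;
    }
    # everything else goes to the app container
    location / {
        proxy_pass http://app:80;
    }
}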
It doesn't seem to be configurable at the moment, based on this open issue. However, you could always fork the project and modify the start script to use your own custom Docker image. If so, make sure you base it on:

FROM meteorhacks/meteord:base

...
"How do I use the python interpreter from the container without having to type docker exec every time?" What about opening a shell in that container instead?

docker exec -it <your container id> /bin/bash -l

and then from there use python. "Also, how do I access the project code in..."
Unfortunately there isn't: --ipv6 is a daemon-wide flag that cannot be overridden on a per-container basis.
docker,coreos,kubernetes,etcd,flannel
Currently, I have to manually register the minion prior to spinning up the minion instance. This is because there is an open issue, as of right now, preventing the minion from self-registering in certain cases. UPDATE: I'm now using kube-register to register each minion/node on start of the kubelet...
database,shell,docker,docker-compose
The solutions I have found are:

docker run tomdavidson/initdb bash -c "`cat initdb.sh`"

and setting an ENV VAR equal to your script and setting up your Docker image to run the script (of course one can ADD/COPY and use host volumes, but that is not this question), for example:

docker...
centos,docker,expect,autoexpect
It seems like autoexpect requires the SHELL env var to be set, but your current running shell (not bash?) does not set it. So try: SHELL=bash autoexpect.
amazon-web-services,amazon-ec2,docker,elastic-beanstalk,ec2-container-service
UDP support is still missing from the GA release of the Amazon EC2 Container Service; see "Ports are assumed to be TCP" (issue #2) of the Amazon ECS Container Agent. Luckily this surprising gap has already been addressed, and the new ECS agent version is pending release; I...
As in the comments, there's no default editor set (strange; the $EDITOR env variable is empty). You can log into the container with docker exec -it <container> bash and run:

apt-get update
apt-get install vim

or use the following Dockerfile:

FROM confluent/postgres-bw:0.1
RUN ["apt-get", "update"]
RUN ["apt-get", "install", "-y"]...
It's not possible in pure Java, as there is no framework support for Unix sockets. However, you can use some sort of JNI library for this, like this one.
As seen in issue 3721, this is generally a disk space issue. The problem is that docker rmi doesn't always work in that case: "Getting this in v1.2 on CentOS 6.5 if a disk fills up before the image finishes pulling. Unable to rmi the incomplete image." One "nuclear" option:...
Docker doesn't internally use git today for any kind of resource versioning. It does, however: rely on hashing to uniquely identify the filesystem layers, which is what can make it resemble git to the user; and take initial inspiration from the notions of commits, pushes and pulls. One thing that...
Found it. I can supply a name with docker build -t NAME .
There seems to be some confusion here. It looks like you are trying to log into the web container and access the api container using the localhost address? This isn't going to work. The best solution is just to use the link you've declared; within the web container you should...
The problem should be due to the fact that Docker uses port forwarding, which is not compatible with the way Disque currently works. However, you can disable port forwarding in Docker by using a 1:1 mapping, with something like this:

$ docker run -d -p 7711:7711 ...

When this will be...
Normally we think of a container as a process: one process does one job. So I recommend simplifying the container if you can. Set up containers for each service...
A Docker image (which will most likely contain the base system from a Linux distribution) is read-only and is augmented with several layers that are enabled as you write to a location. So you can share the base image and have "add-ons", if you will. This is called a...
No, you can't add nodes on the fly from a list of IPs without manually restarting your swarm and adding the new hosts. But if you use Swarm with service discovery, you can. You can find here the reference needed to implement docker-swarm with service discovery and how to join nodes...
This sounds like a good candidate for a microservice: a long-lived server (implemented in your language of choice) listening on a port for resizing requests, using graphicsmagick under the hood. Pros: implementation details of the resizer are kept inside the container; scalability: you can spin up more containers...
These images are the intermediate layers that make up other images. Each instruction in a Dockerfile results in a new image being created. A lot of Dockerfile instructions only result in changes to the "metadata" rather than the filesystem (e.g. EXPOSE, MAINTAINER), which accounts for images with the same size....