
Conversation

@nathanleclaire

ping @bfirsh I think you'll like this 😉

This is more efficient than polling status in series, especially when
the number of hosts is large.
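The approach can be sketched roughly like this (a minimal stand-in with hypothetical names, not the actual code from this branch): each host gets its own goroutine, and results come back over a single channel.

```go
package main

import (
	"fmt"
	"sync"
)

// hostStatus pairs a host name with its polled status so results
// stay associated even when they arrive out of order.
type hostStatus struct {
	Name   string
	Status string
}

// pollStatuses queries every host concurrently and collects the
// results over one channel. getStatus stands in for the real
// (possibly slow) per-host status check.
func pollStatuses(hosts []string, getStatus func(string) string) []hostStatus {
	out := make(chan hostStatus, len(hosts))
	var wg sync.WaitGroup
	for _, h := range hosts {
		wg.Add(1)
		go func(h string) {
			defer wg.Done()
			out <- hostStatus{Name: h, Status: getStatus(h)}
		}(h)
	}
	wg.Wait()
	close(out)
	var results []hostStatus
	for s := range out {
		results = append(results, s)
	}
	return results
}

func main() {
	statuses := pollStatuses([]string{"dev", "staging"}, func(h string) string {
		return "Running"
	})
	for _, s := range statuses {
		fmt.Printf("%s\t%s\n", s.Name, s.Status)
	}
}
```

Because every goroutine sends a (name, status) pair rather than a bare status, the listing stays correct no matter which poll finishes first.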

@nathanleclaire nathanleclaire force-pushed the concurrent-host-status-polling branch from 3fdbfb3 to ba699f5 Compare October 12, 2014 08:06
@nathanleclaire
Author

I just realized that there is a critical bug in this - the statuses come over the channel out of order with respect to the goroutines that print them for each host. DO NOT MERGE until I push the fix.

This is more efficient than polling status in series, especially when
the number of hosts is large.

Signed-off-by: Nathan LeClaire <[email protected]>
@nathanleclaire nathanleclaire force-pushed the concurrent-host-status-polling branch from f975b8c to 3de8242 Compare October 18, 2014 23:48
@nathanleclaire
Author

OK, fixed 👍 It should work fine now. The order of the hosts listed will vary, but we can always add a custom Sorter if we want them to display the same way every time. With the fix, they will report status consistently.

@nathanleclaire
Author

The more I think about it, the more I think we really do want a sorter eventually (I might work on it soon, but my first priority is cleaning up the AWS driver and making a PR).

@bfirsh bfirsh merged this pull request into bfirsh:host-management Oct 22, 2014
@bfirsh
Owner

bfirsh commented Oct 22, 2014

Thanks @nathanleclaire!

@bfirsh
Owner

bfirsh commented Oct 22, 2014

Not sure why you were saving hosts? 8d0888b

@nathanleclaire
Author

> Not sure why you were saving hosts? 8d0888b

Nice catch - I think this is an artifact from the bad old days when the driver was only half done and I was relying on the call in docker hosts list to get the IP address for the newly started instance.

bfirsh pushed a commit that referenced this pull request Nov 18, 2014
Signed-off-by: Malte Janduda <[email protected]>
bfirsh pushed a commit that referenced this pull request Apr 16, 2015
bfirsh pushed a commit that referenced this pull request Jul 31, 2015
TL;DR: checking for IsExist(err) after a failed MkdirAll() is both
redundant and wrong -- so there are two reasons to remove it.

Quoting MkdirAll documentation:

> MkdirAll creates a directory named path, along with any necessary
> parents, and returns nil, or else returns an error. If path
> is already a directory, MkdirAll does nothing and returns nil.

This means two things:

1. If a directory to be created already exists, no error is returned.

2. If the error returned is IsExist (EEXIST), it means there exists
a non-directory with the same name as a path component MkdirAll
needs to create as a directory. Example: we want to MkdirAll("a/b"),
but file "a" (or "a/b") already exists, so MkdirAll fails.

The above is a theory, based on quoted documentation and my UNIX
knowledge.

3. In practice, though, the current MkdirAll implementation [1]
returns ENOTDIR in most of the cases described in point 2, with the
exception of a race between MkdirAll and someone else creating the
last component of the MkdirAll argument as a file. In this very case
MkdirAll() will indeed return EEXIST.

Because of point 1, an IsExist check after MkdirAll is not needed.

Because of points 2 and 3, ignoring an IsExist error is just plain
wrong, as the directory we require is not created. It's cleaner to
report the error right away.

Note this error is all over the tree, I guess due to copy-paste,
to following the same usage pattern as for Mkdir(),
or to some not-quite-correct examples on the Internet.

[v2: a separate aufs commit is merged into this one]

[1] https://github.com/golang/go/blob/f9ed2f75/src/os/path.go

Signed-off-by: Kir Kolyshkin <[email protected]>
bfirsh pushed a commit that referenced this pull request Apr 22, 2016
Fix exec start api with detach and AttachStdin at same time. fixes #2