
Creating Xenial image with 100GB system volume for VS1M #14

Closed
dev169 opened this issue Oct 17, 2016 · 4 comments

Comments

dev169 commented Oct 17, 2016

Hello,

What is the best approach to create a Xenial image with a 100GB system volume for a VS1M? To be honest, I don't understand why volumes need to be broken up into 50GB chunks to begin with. All I want is a VS1M with a 100GB SSD, in one piece, as advertised.

  • I've set up the image-building environment following the instructions,
  • This is the Makefile:
NAME =                  ubu100
VERSION =               latest
VERSION_ALIASES =       16.04 latest
TITLE =                 Ubuntu Xenial (16.04) 100G
DESCRIPTION =           Ubuntu Xenial (16.04) with 100G system volume
DOC_URL =
SOURCE_URL =            https://github.com/scaleway/image-ubuntu
VENDOR_URL =
DEFAULT_IMAGE_ARCH =    x86_64


IMAGE_VOLUME_SIZE =     100G
IMAGE_BOOTSCRIPT =      stable
IMAGE_NAME =            Ubuntu100G

## Image tools  (https://github.com/scaleway/image-tools)
all:    docker-rules.mk
docker-rules.mk:
        wget -qO - http://j.mp/scw-builder | bash
-include docker-rules.mk

This is how far I get:

root@buildcowsay:~/ubu100# make image_on_local
wget -qO - http://j.mp/scw-builder | bash
--2016-10-17 16:47:54--  https://raw.githubusercontent.com/scaleway/image-tools/master/builder/docker-rules.mk
Resolving raw.githubusercontent.com (raw.githubusercontent.com)... 151.101.120.133
Connecting to raw.githubusercontent.com (raw.githubusercontent.com)|151.101.120.133|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 15673 (15K) [text/plain]
Saving to: ‘docker-rules.mk’

docker-rules.mk                           100%[=======================================================================================>]  15.31K  --.-KB/s   in 0s

2016-10-17 16:47:54 (39.9 MB/s) - ‘docker-rules.mk’ saved [15673/15673]

test -f /tmp/create-image-from-http.sh \
        || wget -qO /tmp/create-image-from-http.sh https://github.com/scaleway/scaleway-cli/raw/master/examples/create-image-from-http.sh
chmod +x /tmp/create-image-from-http.sh
touch .overlays
test x86_64 = x86_64 || make setup_binfmt
docker build  -t scaleway/ubu100:amd64-latest .
Sending build context to Docker daemon 23.04 kB
Step 1 : FROM multiarch/ubuntu-debootstrap:amd64-xenial
 ---> 09e6fb2a9c34
Step 2 : MAINTAINER Scaleway <opensource@scaleway.com> (@scaleway)
 ---> Using cache
 ---> 7561d04bbf4b
Step 3 : ENV DEBIAN_FRONTEND noninteractive SCW_BASE_IMAGE scaleway/ubuntu:xenial
 ---> Using cache
 ---> df338cfcde85
Step 4 : COPY ./overlay-${ARCH}/etc/apt/ /etc/apt/
lstat overlay-x86_64/etc/apt/: no such file or directory
docker-rules.mk:339: recipe for target '.docker-container-x86_64.built' failed
make: *** [.docker-container-x86_64.built] Error 1

Can anyone share step-by-step instructions? It's proving to be quite a riddle.

ghost commented May 6, 2017

I made some changes in the Dockerfile and bypassed the ${ARCH} stuff (see the sketch at the end of this comment).
The problem is that the scw script then tries to create and run a VC1S server with a 100GB secondary volume:

[+] Creating new server in rescue mode with a secondary volume...
FATA[0001] cannot execute 'run': failed to start server xxxxxx-xxxxxx-xxxxxx-xxxxxxx ce: StatusCode: 400, Type: invalid_request_error, APIMessage: The total volume size of VC1S instances must be equal or below 50GB
docker-rules.mk:185: recipe for target 'image_on_local' failed
make: *** [image_on_local] Error 1

I don't know where to change the type of scw instance, and I think that will be a problem too, because a VC1M will not allow a secondary volume bigger than 50GB (50GB system + 50GB secondary), and a VC1L will not allow a secondary volume smaller than 150GB.
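
For reference, one way to get past the failing COPY step from the first comment, without stripping ${ARCH} out of the Dockerfile entirely, is to give it the overlay directory it expects. This is only a sketch; the path is taken from the lstat error above, and whether an empty overlay is enough depends on what the upstream image-ubuntu Dockerfile actually expects to copy:

# Sketch only: satisfy the "COPY ./overlay-${ARCH}/etc/apt/" step that failed with
# "lstat overlay-x86_64/etc/apt/: no such file or directory".
# Run from the image directory (the one containing the Makefile and Dockerfile).
mkdir -p overlay-x86_64/etc/apt

# Alternatively, hard-code the arch in the Dockerfile's COPY line:
#   COPY ./overlay-x86_64/etc/apt/ /etc/apt/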

ViViDboarder commented May 23, 2017

I'm seeing a similar issue.

I found that I can do SCW_COMMERCIAL_TYPE=VC1M make image_on_local so it uses a different instance type that supports up to 100G of storage. However, I get the same error. Looking at the server that was created, I see that the primary volume is the desired size (100G), but the script still tries to create a secondary volume of 50G, pushing the total volume size over 100G, which is the max allowed for a VC1M. I saw similar restrictions with every type of Virtual Server, even the Intensive ones.

However, this doesn't appear to be a restriction with the Baremetal servers! SCW_COMMERCIAL_TYPE=C2S make image_on_local does seem to boot just fine.

Unfortunately, it still fails, because the disk is not /dev/vda, as per this FIXME.

Since the make script only downloads create-image-from-http.sh if it's not found in /tmp, this can be temporarily fixed. I edited /tmp/create-image-from-http.sh, changed /dev/vda to /dev/nbd0, ran SCW_COMMERCIAL_TYPE=C2S make image_on_local again, and got yet another error:

ext2fs_check_if_mount: Can't check if filesystem is mounted due to missing mtab file while determining whether /dev/nbd0 is mounted.
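
For anyone repeating this, the /dev/vda edit above amounts to roughly the following (a sketch only; check the occurrences inside /tmp/create-image-from-http.sh before running it blindly):

# Point the image script at the NBD device the rescue environment exposes
# instead of /dev/vda.
sed -i 's|/dev/vda|/dev/nbd0|g' /tmp/create-image-from-http.sh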

So then I added a new line right after the FIXME and before the mkfs command to execute ln -s /proc/self/mounts /etc/mtab (per this forum post) and reran.

Now I'm getting the following

mke2fs 1.42.13 (17-May-2015)
mkfs.ext4: Device size reported to be zero.  Invalid partition specified, or
	partition table wasn't reread after running fdisk, due to
	a modified partition being busy and in use.  You may need to reboot
	to re-read your partition table.

When I open up a shell on the box, I can't find any information about the partitions at all; fdisk -l shows nothing. This didn't seem to be a problem with the Virtual Servers, so I decided to try those again...
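
(A couple of other things that could be checked from that shell, as a sketch; this assumes the rescue environment is supposed to expose the target volume as /dev/nbd0:)

# Does the kernel see any block devices at all?
cat /proc/partitions
# Reported size of the NBD device in bytes; 0 here would match the mkfs error above.
blockdev --getsize64 /dev/nbd0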

I added the ability to avoid recreating servers by taking a server ID through an environment variable. That way I could log into the web interface and remove the extra 50G disk so that my VS1M had exactly 100G of storage, which lets it boot up just fine. I then started it again from the web, provided the server ID, and ran the script. It worked almost entirely, except that it was unable to tag the image. I did that myself from the snapshot in the web interface, and it seems to have actually worked!

I can't find a good way to do this via the command line, but I did figure out that while we can't make it work with a single volume from the CLI for a VC1M server, we can make the secondary volume 1G. This means that if you use a VC1M, you need to make the primary 99G (for a total of 100G). Alternatively, you can keep it at 100G if you bump up to a VC1L and make the secondary 51G.

If you edit the shell script and add the secondary volume size --volume=1G to the run command, you can make your image 99G and it will go just fine. Again, tagging may be borked, but you can do that from the web pretty easily.
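
Putting the VC1M workaround together, it looks roughly like this (a sketch under the assumptions above; IMAGE_VOLUME_SIZE is from the Makefile in the first comment, and the --volume=1G edit goes on the run command inside /tmp/create-image-from-http.sh):

# 1. In the image Makefile, drop the primary volume to 99G so that
#    99G primary + 1G secondary stays within the 100G limit of a VC1M:
#      IMAGE_VOLUME_SIZE =     99G
# 2. In /tmp/create-image-from-http.sh, add --volume=1G to the run command
#    so the secondary volume is 1G instead of the default 50G.
# 3. Build with the larger instance type:
SCW_COMMERCIAL_TYPE=VC1M make image_on_local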

Hope that helps! If I can figure out something even better I can send a patch.

@ViViDboarder

Found the source of the bug: https://github.com/scaleway/scaleway-cli/blob/fc7c0076a0241a42721dfd15f57025c3150fd765/pkg/api/helpers.go#L340

It appears that, regardless of the size of the primary volume, the CLI will attempt to add a second volume of at least 50G for anything above the smallest sizes, which pushes the total over the maximum. The workaround I listed above will have to do until this is resolved, or until I can figure out why it won't work with bare metal.

@ViViDboarder

I just pushed a new branch for the CLI that, if merged, should fix this.
