
Commit

Merge pull request #91 from ndlib/elasticsearch-pipeline
elasticsearch pipeline
rdoughty authored Oct 7, 2019
2 parents 3076f7f + a590c85 commit 3bf7880
Showing 9 changed files with 517 additions and 0 deletions.
128 changes: 128 additions & 0 deletions deploy/cdk/marble-elasticsearch-pipeline/.gitignore
@@ -0,0 +1,128 @@
# Byte-compiled / optimized / DLL files
__pycache__/
*.py[cod]
*$py.class

# C extensions
*.so

# Distribution / packaging
.Python
build/
develop-eggs/
dist/
downloads/
eggs/
.eggs/
lib/
lib64/
parts/
sdist/
var/
wheels/
pip-wheel-metadata/
share/python-wheels/
*.egg-info/
.installed.cfg
*.egg
MANIFEST

# PyInstaller
# Usually these files are written by a python script from a template
# before PyInstaller builds the exe, so as to inject date/other infos into it.
*.manifest
*.spec

# Installer logs
pip-log.txt
pip-delete-this-directory.txt

# Unit test / coverage reports
htmlcov/
.tox/
.nox/
.coverage
.coverage.*
.cache
nosetests.xml
coverage.xml
*.cover
.hypothesis/
.pytest_cache/

# Translations
*.mo
*.pot

# Django stuff:
*.log
local_settings.py
db.sqlite3
db.sqlite3-journal

# VSCode:
.vscode/

# AWS CDK:
cdk.out/
deploy.json

# Flask stuff:
instance/
.webassets-cache

# Scrapy stuff:
.scrapy

# Sphinx documentation
docs/_build/

# PyBuilder
target/

# Jupyter Notebook
.ipynb_checkpoints

# IPython
profile_default/
ipython_config.py

# pipenv
# According to pypa/pipenv#598, it is recommended to include Pipfile.lock in version control.
# However, in case of collaboration, if having platform-specific dependencies or dependencies
# having no cross-platform support, pipenv may install dependencies that don't work, or not
# install all needed dependencies.
#Pipfile.lock

# celery beat schedule file
celerybeat-schedule

# SageMath parsed files
*.sage.py

# Environments
.env
.venv
env/
venv/
ENV/
env.bak/
venv.bak/

# Spyder project settings
.spyderproject
.spyproject

# Rope project settings
.ropeproject

# mkdocs documentation
/site

# mypy
.mypy_cache/
.dmypy.json
dmypy.json

# Pyre type checker
.pyre/
77 changes: 77 additions & 0 deletions deploy/cdk/marble-elasticsearch-pipeline/README.md
@@ -0,0 +1,77 @@
# Description
This project creates an AWS pipeline that deploys an Elasticsearch cluster from the application repository specified in `cdk.json`.

# Prerequisites
A GitHub OAuth token: https://help.github.com/en/articles/git-automation-with-oauth-tokens

# Configuration
Edit `cdk.json` to point to a different GitHub Elasticsearch application source if desired; the defaults are shown below.
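
These are the context values this change ships in `cdk.json` (repeated here from the diff further down for convenience); changing `repo_owner`, `repo_name`, or `repo_branch` points the pipeline at a different source repository:

```
"context": {
    "repo_oauth_path": "/all/github/ndlib-git",
    "repo_name": "marble-elasticsearch",
    "repo_branch": "master",
    "repo_owner": "ndlib"
}
```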

# Installation
Once `cdk.json` is configured properly, run the standard CDK commands to deploy:
```
python3 -m venv .env
source .env/bin/activate
pip install -r requirements.txt
cdk deploy
deactivate
```

# Development - AWS CDK local

This is a blank project for Python development with CDK.

The `cdk.json` file tells the CDK Toolkit how to execute your app.

This project is set up like a standard Python project. The initialization
process also creates a virtualenv within this project, stored under the .env
directory. To create the virtualenv it assumes that there is a `python3`
(or `python` for Windows) executable in your path with access to the `venv`
package. If for any reason the automatic creation of the virtualenv fails,
you can create the virtualenv manually.

To manually create a virtualenv on macOS and Linux:

```
$ python3 -m venv .env
```

After the init process completes and the virtualenv is created, you can use the following
step to activate your virtualenv.

```
$ source .env/bin/activate
```

If you are on a Windows platform, you would activate the virtualenv like this:

```
% .env\Scripts\activate.bat
```

Once the virtualenv is activated, you can install the required dependencies.

```
$ pip install -r requirements.txt
```

At this point you can now synthesize the CloudFormation template for this code.

```
$ cdk synth
```

To add additional dependencies, for example other CDK libraries, just add
them to your `setup.py` file and rerun the `pip install -r requirements.txt`
command.

# Useful commands

* `cdk ls` list all stacks in the app
* `cdk synth` emits the synthesized CloudFormation template
* `cdk deploy` deploy this stack to your default AWS account/region
* `cdk diff` compare deployed stack with current state
* `cdk docs` open CDK documentation

Enjoy!
10 changes: 10 additions & 0 deletions deploy/cdk/marble-elasticsearch-pipeline/app.py
@@ -0,0 +1,10 @@
#!/usr/bin/env python

from aws_cdk import core
from marble_elasticsearch_pipeline.marble_elasticsearch_pipeline_stack import MarbleElasticsearchPipelineStack


app = core.App()
meps = MarbleElasticsearchPipelineStack(app, "marble-elasticsearch-pipeline")
meps.add_stages()
app.synth()
9 changes: 9 additions & 0 deletions deploy/cdk/marble-elasticsearch-pipeline/cdk.json
@@ -0,0 +1,9 @@
{
  "app": "python3 app.py",
  "context": {
    "repo_oauth_path": "/all/github/ndlib-git",
    "repo_name": "marble-elasticsearch",
    "repo_branch": "master",
    "repo_owner": "ndlib"
  }
}
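
The stack that consumes these values is part of this commit but not shown in the loaded portion of the diff. As a minimal sketch of how a CDK v1 app typically reads this context with `node.try_get_context` (the class and variable names here are illustrative, not the project's actual code):

```
from aws_cdk import core


class ContextReaderSketch(core.Stack):
    """Illustrative only: how the cdk.json context above is typically read in a CDK v1 app."""

    def __init__(self, scope: core.Construct, id: str, **kwargs) -> None:
        super().__init__(scope, id, **kwargs)
        # Each value falls back to the default committed in cdk.json unless
        # overridden on the command line, e.g. `cdk deploy -c repo_branch=my-branch`.
        repo_owner = self.node.try_get_context('repo_owner')            # "ndlib"
        repo_name = self.node.try_get_context('repo_name')              # "marble-elasticsearch"
        repo_branch = self.node.try_get_context('repo_branch')          # "master"
        repo_oauth_path = self.node.try_get_context('repo_oauth_path')  # where the GitHub OAuth token is stored
```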
@@ -0,0 +1,49 @@
from aws_cdk import core
import aws_cdk.aws_codebuild as codebuild


class BuildProject():
    """Builds a CodeBuild PipelineProject that deploys the Elasticsearch stack for a single stage."""

    def __init__(self, scope: core.Construct, role, stage, context=None):
        self.scope = scope
        self.role = role
        self.stage = stage
        self.context = context

    def pipeline_project(self):
        project_name = f"{self.stage}Project"
        env = {'buildImage': codebuild.LinuxBuildImage.STANDARD_2_0}
        env_vars = {
            'CI': {'value': 'true', 'type': codebuild.BuildEnvironmentVariableType.PLAINTEXT},
            'STAGE': {'value': self.stage, 'type': codebuild.BuildEnvironmentVariableType.PLAINTEXT},
        }
        artifacts = {
            'files': [
                'cdk.out/*',
                'scripts/codebuild/**/*'
            ],
        }
        return codebuild.PipelineProject(
            self.scope, project_name, role=self.role, environment=env, environment_variables=env_vars,
            build_spec=codebuild.BuildSpec.from_object({
                'version': '0.2',
                'phases': self._get_phases(),
                'artifacts': artifacts,
            }))

    def _get_phases(self):
        # Buildspec phases: install the deploy scripts, point pre_build at the
        # stage-specific Elasticsearch stack, then deploy and run post-build checks.
        return {
            'install': {
                'runtime-versions': {
                    'python': 3.7,
                },
                'commands': [
                    'echo "Ensure that the codebuild directory is executable"',
                    'chmod -R 755 ./scripts/codebuild/*',
                    './scripts/codebuild/install.sh'
                ],
            },
            'pre_build': {
                'commands': [f"./scripts/codebuild/pre_build.sh {self.context['es_stackname']}-{self.stage}"],
            },
            'build': {
                'commands': ['./scripts/codebuild/deploy.sh'],
            },
            'post_build': {
                'commands': ['./scripts/codebuild/post_build.sh'],
            },
        }
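
The remaining files in this commit, including the pipeline stack that instantiates `BuildProject`, did not load in this excerpt. As a rough sketch of how `pipeline_project()` might be attached to a CodePipeline stage, assuming hypothetical names `stack`, `build_role`, `pipeline`, and `source_output`, and an assumed `es_stackname` context value:

```
import aws_cdk.aws_codepipeline as codepipeline
import aws_cdk.aws_codepipeline_actions as actions

# Hypothetical wiring: `stack`, `build_role`, `pipeline`, and `source_output`
# are assumed to exist in the (unshown) pipeline stack.
build_output = codepipeline.Artifact('TestBuildOutput')
project = BuildProject(stack, build_role, 'test',
                       context={'es_stackname': 'marble-elasticsearch'}).pipeline_project()

pipeline.add_stage(
    stage_name='DeployTest',
    actions=[actions.CodeBuildAction(
        action_name='Deploy',
        project=project,
        input=source_output,
        outputs=[build_output],
    )],
)
```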
