
summative-assignment
==============================

πŸ” πŸ“ˆ Exploratory data analysis and Sklearn algorithm test harness for the QA Datascience Summative assignment.

Datasets

  • Pricing data
  • Postcode data
  • Borough Profile Data

Shapefiles

  • GIS Boundary London Wards

Preprocessing

Missing Values

  • Missing Completely At Random (MCAR): data are missing independently of both observed and unobserved data.
  • Missing At Random (MAR): given the observed data, data are missing independently of the unobserved data.
  • Missing Not At Random (MNAR): the probability of an observation being missing depends on the values of the unobserved data themselves.

Divide by Borough

  • Divide the merged data into one dataset per Borough (see the sketch below).
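
A minimal sketch of this split, assuming the merged DataFrame has a `borough` column and that per-borough CSVs go in `data/interim`:

```python
def split_by_borough(df, out_dir="data/interim"):
    """Write one CSV per borough so each borough can be explored separately."""
    for name, group in df.groupby("borough"):
        # e.g. 'Tower Hamlets' -> data/interim/tower_hamlets.csv
        group.to_csv(f"{out_dir}/{name.lower().replace(' ', '_')}.csv", index=False)
```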

Geocoding Postcodes

  1. Check for NaN values in the list of postcodes and drop them.
  2. Batch the remaining postcodes into groups of 100.
  3. Call the geocoding API once per batch.
  4. Parse each response.
  5. Map the returned coordinates back onto the postcodes (see the sketch after this list).
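
A minimal sketch of these steps, assuming the postcodes.io bulk lookup endpoint (which accepts up to 100 postcodes per request) and the `requests` library:

```python
import requests

def geocode_postcodes(postcodes):
    """Geocode postcodes in batches of 100 via the postcodes.io bulk endpoint."""
    # 1. Drop NaN/empty entries before calling the API
    clean = [p for p in postcodes if isinstance(p, str) and p.strip()]
    coords = {}
    # 2. Process 100 postcodes per request
    for i in range(0, len(clean), 100):
        batch = clean[i:i + 100]
        # 3. Call the API
        resp = requests.post("https://api.postcodes.io/postcodes",
                             json={"postcodes": batch})
        resp.raise_for_status()
        # 4. Parse the response and 5. map each postcode to its coordinates
        for item in resp.json()["result"]:
            result = item.get("result")
            if result:
                coords[item["query"]] = (result["latitude"], result["longitude"])
    return coords
```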

CSV Lookup

  1. Load a DataFrame with the postcodes.
  2. Batch by area code.
  3. Group by area code.
  4. Open the respective CSV file, 'NSPL_MAY_2019_UK_{}.csv'.format(areacode).
  5. Choose the columns wanted, look them up, and append them to the respective rows in the DataFrame (see the sketch after this list).
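
A sketch of the lookup, assuming the DataFrame has a `postcode` column and that the per-area NSPL files expose `pcds`, `lat` and `long` columns:

```python
import pandas as pd

def lookup_coordinates(df, nspl_dir="data/external"):
    """Join lat/long from the per-area NSPL CSVs onto a postcode DataFrame."""
    df = df.copy()
    # Area code is the leading letters of the postcode, e.g. 'SW1A 1AA' -> 'SW'
    df["areacode"] = df["postcode"].str.extract(r"^([A-Z]+)", expand=False)
    pieces = []
    # Group by area code so only the relevant NSPL file is read for each group
    for areacode, group in df.groupby("areacode"):
        lookup = pd.read_csv(
            f"{nspl_dir}/NSPL_MAY_2019_UK_{areacode}.csv",
            usecols=["pcds", "lat", "long"],  # only the columns we want
        )
        pieces.append(group.merge(lookup, left_on="postcode",
                                  right_on="pcds", how="left"))
    return pd.concat(pieces, ignore_index=True)
```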

Joining Tables

  1. Postcode lookup: (2628568, 41)
  2. Our addresses: (345551, 16)
  3. Our merged table: (157472, 57)
  4. This means that 188,079 addresses (345,551 - 157,472) were dropped by the join. Were they actually duplicates, or postcodes with no match in the lookup? The diagnostic sketch below checks.
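
A diagnostic sketch for that question, assuming DataFrames named `addresses` and `postcode_lookup` that share a `postcode` column:

```python
# Left-join with an indicator column so unmatched rows can be counted separately
merged = addresses.merge(postcode_lookup, on="postcode", how="left", indicator=True)
print((merged["_merge"] == "left_only").sum())  # address rows with no postcode match
print(addresses.duplicated().sum())             # address rows that are exact duplicates
```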

Build Features

  • Combine the lat and the long into one geometry feature (see the map sketch after this list)
  • Imputation
  • Correlation/covariance
  • Feature Selection
  • Plot postcodes on Map
  • Plot GPS Coordinates
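
A minimal sketch of the geometry feature and the map plot, assuming geopandas, `lat`/`long` columns, and a hypothetical path to the GIS Boundary London Wards shapefile:

```python
import geopandas as gpd
import matplotlib.pyplot as plt

def plot_postcodes(df, wards_path="data/external/London_Ward.shp"):
    """Combine lat/long into a single geometry column and plot it over the wards."""
    points = gpd.GeoDataFrame(
        df,
        geometry=gpd.points_from_xy(df["long"], df["lat"]),  # one geometry feature
        crs="EPSG:4326",
    )
    wards = gpd.read_file(wards_path).to_crs("EPSG:4326")
    ax = wards.plot(color="lightgrey", edgecolor="white", figsize=(10, 8))
    points.plot(ax=ax, markersize=1, color="steelblue")
    plt.show()
```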

Unique Id

This works very well if you have few categories; in the case of thousands of IDs, however, it will increase the dimensionality too much. What you can do instead is collect statistics about the target and other features per group, join these onto your set, and then remove the categories. This is what is usually done with a high number of categories. You have to be careful not to leak any information about your target into your features, though (a problem called label leaking). A sketch is given below.
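
A minimal sketch of per-group target statistics, assuming hypothetical `unique_id` and `price` columns; the statistics are computed on the training split only, so no target information leaks in from the test set:

```python
import pandas as pd

def add_group_stats(train, test, group_col="unique_id", target="price"):
    """Replace a high-cardinality ID with per-group target statistics."""
    stats = (
        train.groupby(group_col)[target]
        .agg(["mean", "count"])
        .rename(columns={"mean": f"{group_col}_target_mean",
                         "count": f"{group_col}_count"})
        .reset_index()
    )
    train = train.merge(stats, on=group_col, how="left")
    test = test.merge(stats, on=group_col, how="left")
    # IDs unseen in training fall back to the global training mean
    test[f"{group_col}_target_mean"] = test[f"{group_col}_target_mean"].fillna(
        train[target].mean()
    )
    # The raw high-cardinality category can now be dropped
    return train.drop(columns=[group_col]), test.drop(columns=[group_col])
```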

Train Models

  • Transformation
  • Normal Distribution
  • Cross Validation (see the harness sketch after this list)
  • Linear Regression
  • Support Vector Machine
  • PCA
  • Gaussian Naive Bayes
  • KNN
  • Naive Bayes
  • Perceptron
  • Neural Net
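
A minimal sketch of the cross-validated test harness, assuming a regression target (price) and features `X`, `y` already built; the model list here is illustrative rather than the full set above:

```python
from sklearn.model_selection import cross_val_score, KFold
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LinearRegression
from sklearn.svm import SVR
from sklearn.neighbors import KNeighborsRegressor
from sklearn.neural_network import MLPRegressor

MODELS = {
    "Linear Regression": LinearRegression(),
    "Support Vector Machine": SVR(),
    "KNN": KNeighborsRegressor(),
    "Neural Net": MLPRegressor(max_iter=1000),
}

def evaluate(X, y):
    """Score each candidate model with 5-fold cross validation."""
    cv = KFold(n_splits=5, shuffle=True, random_state=42)
    for name, model in MODELS.items():
        # Scaling sits inside the pipeline so it is refit on every training fold
        pipeline = make_pipeline(StandardScaler(), model)
        scores = cross_val_score(pipeline, X, y, cv=cv, scoring="r2")
        print(f"{name}: mean R^2 = {scores.mean():.3f} (+/- {scores.std():.3f})")
```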

Project Organization


β”œβ”€β”€ LICENSE
β”œβ”€β”€ Makefile           <- Makefile with commands like `make data` or `make train`
β”œβ”€β”€ README.md          <- The top-level README for developers using this project.
β”œβ”€β”€ data
β”‚   β”œβ”€β”€ external       <- Data from third party sources.
β”‚   β”œβ”€β”€ interim        <- Intermediate data that has been transformed.
β”‚   β”œβ”€β”€ processed      <- The final, canonical data sets for modeling.
β”‚   └── raw            <- The original, immutable data dump.
β”‚
β”œβ”€β”€ docs               <- A default Sphinx project; see sphinx-doc.org for details
β”‚
β”œβ”€β”€ models             <- Trained and serialized models, model predictions, or model summaries
β”‚
β”œβ”€β”€ notebooks          <- Jupyter notebooks. Naming convention is a number (for ordering),
β”‚                         the creator's initials, and a short `-` delimited description, e.g.
β”‚                         `1.0-jqp-initial-data-exploration`.
β”‚
β”œβ”€β”€ references         <- Data dictionaries, manuals, and all other explanatory materials.
β”‚
β”œβ”€β”€ reports            <- Generated analysis as HTML, PDF, LaTeX, etc.
β”‚   └── figures        <- Generated graphics and figures to be used in reporting
β”‚
β”œβ”€β”€ requirements.txt   <- The requirements file for reproducing the analysis environment, e.g.
β”‚                         generated with `pip freeze > requirements.txt`
β”‚
β”œβ”€β”€ setup.py           <- makes project pip installable (pip install -e .) so src can be imported
β”œβ”€β”€ src                <- Source code for use in this project.
β”‚   β”œβ”€β”€ __init__.py    <- Makes src a Python module
β”‚   β”‚
β”‚   β”œβ”€β”€ data           <- Scripts to download or generate data
β”‚   β”‚   └── make_dataset.py
β”‚   β”‚
β”‚   β”œβ”€β”€ features       <- Scripts to turn raw data into features for modeling
β”‚   β”‚   └── build_features.py
β”‚   β”‚
β”‚   β”œβ”€β”€ models         <- Scripts to train models and then use trained models to make
β”‚   β”‚   β”‚                 predictions
β”‚   β”‚   β”œβ”€β”€ predict_model.py
β”‚   β”‚   └── train_model.py
β”‚   β”‚
β”‚   └── visualization  <- Scripts to create exploratory and results oriented visualizations
β”‚       └── visualize.py
β”‚
└── tox.ini            <- tox file with settings for running tox; see tox.testrun.org

Project based on the cookiecutter data science project template. #cookiecutterdatascience
