extract package

Submodules
extract.csv_dataframe module

extract.csv_dataframe.is_valid_file(parser, arg)

extract.csv_dataframe.main()

extract.csv_dataframe.parser()
    Parser function to run arguments from the command line and to add a description to the Sphinx docs.

extract.csv_dataframe.read_crow_file(file, datecol)
    Parses the CROW afvaldata.
    Args:
        file: xls/xlsx file containing at least a date column
        datecol: date column in the format %Y-%m-%d %H:%M:%S
    Returns:
        pd.DataFrame: cleaned data frame with datum and time columns added

extract.csv_dataframe.read_mora_file(file, datecol)

extract.csv_dataframe.strip_cols(df)
    Simple utility function to clean dataframe columns.

extract.csv_dataframe.valid_date(s)
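
A minimal usage sketch for this module; the file name and date-column name below are placeholders, not values shipped with the package:

    from extract.csv_dataframe import read_crow_file, strip_cols

    # read_crow_file expects an xls/xlsx file with a date column
    # formatted as %Y-%m-%d %H:%M:%S (file and column names are hypothetical).
    df = read_crow_file("crow_afvaldata.xlsx", datecol="datum")
    df = strip_cols(df)  # tidy the column names
    print(df.head())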
extract.download_bbga_by_variable__area_year module

extract.download_bbga_by_variable__area_year.main()
    Example using total citizens by department in 2017. Written to schema 'bi_afval' and table 'd_bbga_cd'.

extract.download_bbga_by_variable__area_year.statisticsByAreaByYear(variableName, AreaType, Year)
    Area options: stadsdeel, gebiedsgerichtwerken, buurtcombinatie, buurt
    Year options: e.g., 2015, 2016, 2017
    Variable names can be found here: https://api.datapunt.amsterdam.nl/bbga/variabelen/

extract.download_bbga_by_variable__area_year.writeStatisticsTable2PGTable(schema, tableName, df_std)
    Change the database connection parameters to your own login credentials and make sure that the schema exists.
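
A hedged sketch of how these two functions might be combined; the variable name and table name below are placeholders chosen for illustration:

    from extract.download_bbga_by_variable__area_year import (
        statisticsByAreaByYear,
        writeStatisticsTable2PGTable,
    )

    # "BEVTOTAAL" is used here as an illustrative variable name; check the
    # list at https://api.datapunt.amsterdam.nl/bbga/variabelen/.
    df = statisticsByAreaByYear("BEVTOTAAL", "stadsdeel", 2017)

    # The schema must already exist and the module's database connection
    # parameters must be set to your own credentials; the table name is made up.
    writeStatisticsTable2PGTable("bi_afval", "bbga_bevtotaal_stadsdeel_2017", df)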
extract.download_from_api_brk module

extract.download_from_api_brk.getJsonData(url, accessToken)
    Get a json response from a url with access token.
    Args:
        url: api endpoint
        accessToken: access token generated using the auth helper: GetAccessToken().getAccessToken(usertype='employee_plus', scopes='BRK/RS,BRK/RSN/,BRK/RO')
    Returns:
        parsed json or error message

extract.download_from_api_brk.main()

extract.download_from_api_brk.parser()
    Parser function to run arguments from the command line and to add a description to the Sphinx docs.
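
A minimal sketch, assuming the import path of the auth helper and the endpoint url below (both are assumptions; only the getAccessToken call itself comes from the docstring):

    from extract.download_from_api_brk import getJsonData
    from helpers import GetAccessToken  # hypothetical import path

    token = GetAccessToken().getAccessToken(
        usertype="employee_plus", scopes="BRK/RS,BRK/RSN/,BRK/RO"
    )
    # Placeholder endpoint; substitute the BRK endpoint you need.
    data = getJsonData("https://api.data.amsterdam.nl/brk/object/", token)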
extract.download_from_api_kvk module

extract.download_from_api_kvk.get_kvk_json(url, params, api_key=None)
    Get a json response from a url, provided params + api_key.
    Args:
        url: api endpoint
        params: kvkNumber, branchNumber, rsin, street, houseNumber, postalCode, city, tradeName, or provide lists/dicts of values
        api_key: kvk api_key; add KVK_API_KEY to your ENV variables
    Returns:
        parsed json or error message

extract.download_from_api_kvk.main()

extract.download_from_api_kvk.parser()
    Parser function to run arguments from the command line and to add a description to the Sphinx docs.

extract.download_from_api_kvk.response_to_json(response)
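
A minimal sketch of get_kvk_json; the endpoint url and search parameters are placeholders, only the parameter names and the KVK_API_KEY environment variable come from the docstring:

    import os
    from extract.download_from_api_kvk import get_kvk_json

    api_key = os.environ["KVK_API_KEY"]  # per the docstring, set this ENV variable

    url = "https://api.kvk.nl/api/v2/search/companies"  # placeholder endpoint
    params = {"postalCode": "1011PN", "houseNumber": "1"}  # illustrative values
    result = get_kvk_json(url, params, api_key=api_key)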
extract.download_from_api_tellus module

extract.download_from_api_tellus.conversionListCvalues(metadata)
    Create a conversion dictionary for values in the tellus api, which consists of 60 speed + length values named c1 to c60.

extract.download_from_api_tellus.getJsonData(url, accessToken)
    Get a json response from a url with access token.
    Args:
        url: api endpoint
        accessToken: access token generated using the auth helper: GetAccessToken().getAccessToken(usertype='employee', scopes='TLLS/R')
    Returns:
        parsed json or error message

extract.download_from_api_tellus.get_data(url_api, endpoint, metadata, accessToken, limit)
    Get and flatten all the data from the api.
    Args:
        url_api: the main api url, for example https://api.data.amsterdam.nl/tellus
        endpoint: one endpoint, for example tellus
        metadata: a list of dictionaries from the other endpoints, in this case for tellus location, speed and length.
        accessToken: access token generated using the auth helper: GetAccessToken().getAccessToken()
        limit: the number of pages you want to retrieve, ideal for testing first, for example 10
    Returns:
        A list containing multiple items which are all reformatted to a flattened json with added metadata.

extract.download_from_api_tellus.main()

extract.download_from_api_tellus.parser()
    Parser function to run arguments from the command line and to add a description to the Sphinx docs.

extract.download_from_api_tellus.reformatData(item, tellus_metadata, cvalues)
    Reformat the data from a matrix to a flattened dict with label and tellus names.
    Args:
        item: one recorded hour which contains 60 types of registrations, c1-c60.
        tellus_metadata: list of description values for each tellus.
        cvalues: conversion of the 60 values, used to add the proper labels to each c1 to c60 counted record.
    Returns:
        60 rows by c-value with metadata and label descriptions.
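
A sketch of a small test run with get_data, assuming the auth-helper import path and the metadata list below; "tellus" is the only endpoint name given in the docstrings, so the metadata construction is a placeholder:

    from extract.download_from_api_tellus import getJsonData, get_data
    from helpers import GetAccessToken  # hypothetical import path

    token = GetAccessToken().getAccessToken(usertype="employee", scopes="TLLS/R")
    url_api = "https://api.data.amsterdam.nl/tellus"

    # The metadata argument is a list of dictionaries from the other endpoints
    # (tellus location, speed and length); this single-item list is a placeholder.
    metadata = [getJsonData(url_api + "/tellus/", token)]

    # limit=10 keeps the run small while testing.
    rows = get_data(url_api, "tellus", metadata, token, limit=10)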
extract.download_from_api_with_authentication module

extract.download_from_api_with_authentication.getJsonData(url, access_token)
    Get a json response from a url with access token.
    Args:
        url: api endpoint
        access_token: access token generated using the auth helper: GetAccessToken().getAccessToken(usertype='employee_plus', scopes='BRK/RS,BRK/RSN/,BRK/RO')
    Returns:
        parsed json or error message

extract.download_from_api_with_authentication.main()

extract.download_from_api_with_authentication.parser()
    Parser function to run arguments from the command line and to add a description to the Sphinx docs.

extract.download_from_api_with_authentication.retrywithtrailingslash(url, access_token)
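
A sketch under stated assumptions: the auth-helper import path, the endpoint url, and the idea that retrywithtrailingslash is a fallback after a failed getJsonData call are all assumptions, not documented behaviour:

    from extract.download_from_api_with_authentication import (
        getJsonData,
        retrywithtrailingslash,
    )
    from helpers import GetAccessToken  # hypothetical import path

    token = GetAccessToken().getAccessToken(
        usertype="employee_plus", scopes="BRK/RS,BRK/RSN/,BRK/RO"
    )
    url = "https://api.data.amsterdam.nl/brk/object"  # placeholder endpoint
    data = getJsonData(url, token)
    if not data:
        # Assumption: retry the same endpoint with a trailing slash on failure.
        data = retrywithtrailingslash(url, token)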
extract.download_from_catalog module

extract.download_from_catalog.download_all_files(metadata, download_directory)
    Download all files from the metadata resources list.
    Args:
        metadata: json dictionary from ckan with all the metadata, including the resources list of all files.
        download_directory: path where to store the files, for example data.
    Result:
        The download_directory is created if it does not yet exist and filled with all data, unzipped where needed.

extract.download_from_catalog.download_file(file_location, target)

extract.download_from_catalog.download_metadata(url)
    Download the metadata of a dataset from the data catalog.
    Args:
        url: full data.amsterdam.nl url of the desired dataset, for example: https://data.amsterdam.nl/#?dte=catalogus%2Fapi%2F3%2Faction%2Fpackage_show%3Fid%3D5d84c216-b826-4406-8297-292678dee13c
    Result:
        All the metadata of this dataset as a json dictionary, with the owner, refresh date, resource urls to the desired files, etc.

extract.download_from_catalog.get_catalog_package_id(url)
    Retrieve the package id from a full data.amsterdam.nl url, for example: catalogus/api/3/action/package_show?id=c1f04a62-8b69-4775-ad83-ce2647a076ef
    Args:
        url: full data.amsterdam.nl url of the desired dataset, for example: https://data.amsterdam.nl/#?dte=catalogus%2Fapi%2F3%2Faction%2Fpackage_show%3Fid%3D5d84c216-b826-4406-8297-292678dee13c
    Result:
        Unique id number of the package.

extract.download_from_catalog.main()

extract.download_from_catalog.parser()
    Parser function to run arguments from the command line and to add a description to the Sphinx docs.
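
A minimal sketch chaining download_metadata and download_all_files; the dataset url is the example from the docstrings above and the download directory is the documented example value:

    from extract.download_from_catalog import download_metadata, download_all_files

    url = (
        "https://data.amsterdam.nl/#?dte=catalogus%2Fapi%2F3%2Faction%2F"
        "package_show%3Fid%3D5d84c216-b826-4406-8297-292678dee13c"
    )
    metadata = download_metadata(url)
    # Download (and unzip) every resource listed in the metadata into data/.
    download_all_files(metadata, "data")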
extract.download_from_objectstore module

extract.download_from_objectstore.download_container(connection, container, prefix, output_folder)
    Download a file from the objectstore.
    Args:
        connection: connection session using the objectstore_connection function from helpers.connections
        prefix: tag or folder name of the file, for example subfolder/subsubfolder
        output_folder: local folder to write into, for example /{folder}/
    Returns:
        Written file /{folder}/{prefix}/{file}

extract.download_from_objectstore.download_containers(connection, prefixes, output_folder)
    Download multiple files from the objectstore.
    Args:
        connection: connection session using the objectstore_connection function from helpers.connections
        prefixes: multiple folders where the files are located, for example aanvalsplan_schoon/crow,aanvalsplan_schoon/mora
        output_folder: local folder to write files into, for example app/data for a docker setup
    Result:
        Loops through the download_container function for each prefix (= folder).

extract.download_from_objectstore.get_full_container_list(connection, container, **kwargs)
    Get all files stored in the container (incl. sub-containers).
    Args:
        connection: connection session using the objectstore_connection function from helpers.connections
        container: name of the root container/folder in the objectstore
    Returns:
        Generator object with all containers.

extract.download_from_objectstore.main()

extract.download_from_objectstore.parser()
    Parser function to run arguments from the command line and to add a description to the Sphinx docs.
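
A sketch of download_containers; the arguments to objectstore_connection and the exact shape of the prefixes argument (here a comma-separated string, as in the docstring example) are assumptions:

    from helpers.connections import objectstore_connection
    from extract.download_from_objectstore import download_containers

    connection = objectstore_connection("config.ini", "objectstore")  # hypothetical arguments
    # Download everything under these prefixes (folders) into app/data.
    download_containers(
        connection,
        "aanvalsplan_schoon/crow,aanvalsplan_schoon/mora",
        "app/data",
    )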
extract.download_from_wfs module

extract.download_from_wfs.get_layer_from_wfs(url_wfs, layer_name, srs, outputformat)
    Get a layer from a WFS service.
    Args:
        url_wfs: full url of the WFS including https, excluding /?, for example https://map.data.amsterdam.nl/maps/gebieden
        layer_name: title of the layer, for example stadsdeel
        srs: coordinate system number, excluding EPSG, for example 28992
        outputformat: leave empty to return standard GML, else define json, geojson, txt or shapezip, for example geojson
    Returns:
        The layer in the specified output format.

extract.download_from_wfs.get_layers_from_wfs(url_wfs)
    Get all layer names in the WFS service, print them and return them in a list.

extract.download_from_wfs.get_multiple_geojson_from_wfs(url_wfs, layer_names, srs, output_folder)
    Get all layers and save them as geojson files.
    Args:
        url_wfs: full url of the WFS including https, excluding /?, for example https://map.data.amsterdam.nl/maps/gebieden
        layer_names: single or multiple titles of the layers, separated by a comma without spaces, for example stadsdeel,buurtcombinatie,gebiedsgerichtwerken,buurt
        srs: coordinate system number, excluding EPSG, for example 28992
        output_folder: folder to save the files to, for example path_to_folder/another_folder

extract.download_from_wfs.main()

extract.download_from_wfs.parser()
    Parser function to run arguments from the command line and to add a description to the Sphinx docs.
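
A minimal sketch using the example values from the docstrings above; the output folder name and the string form of the srs argument are assumptions:

    from extract.download_from_wfs import (
        get_layers_from_wfs,
        get_multiple_geojson_from_wfs,
    )

    url_wfs = "https://map.data.amsterdam.nl/maps/gebieden"
    get_layers_from_wfs(url_wfs)  # prints and returns the available layer names

    # Save a selection of layers as geojson; "data" is a placeholder folder.
    get_multiple_geojson_from_wfs(
        url_wfs,
        "stadsdeel,buurtcombinatie,gebiedsgerichtwerken,buurt",
        "28992",
        "data",
    )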