amazonka-datapipeline: Amazon Data Pipeline SDK.

[ aws, library, mpl ]

AWS Data Pipeline configures and manages a data-driven workflow called a pipeline. AWS Data Pipeline handles the details of scheduling and ensuring that data dependencies are met so that your application can focus on processing the data.

AWS Data Pipeline provides a JAR implementation of a task runner called AWS Data Pipeline Task Runner. AWS Data Pipeline Task Runner provides logic for common data management scenarios, such as performing database queries and running data analysis using Amazon Elastic MapReduce (Amazon EMR). You can use AWS Data Pipeline Task Runner as your task runner, or you can write your own task runner to provide custom data management.

AWS Data Pipeline implements two main sets of functionality. Use the first set to create a pipeline and define data sources, schedules, dependencies, and the transforms to be performed on the data. Use the second set in your task runner application to receive the next task ready for processing. The logic for performing the task, such as querying the data, running data analysis, or converting the data from one format to another, is contained within the task runner. The task runner performs the task assigned to it by the web service, reporting progress to the web service as it does so. When the task is done, the task runner reports the final success or failure of the task to the web service.
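
For illustration, the first set of operations might be exercised roughly as follows, given an amazonka Env. This is a minimal sketch: the createPipeline and activatePipeline smart constructors are part of Network.AWS.DataPipeline, but the response lens cprsPipelineId and the exact constructor arguments are assumptions based on the generated naming scheme, so consult the haddocks for your version.

```haskell
{-# LANGUAGE OverloadedStrings #-}

import Control.Lens
import Network.AWS
import Network.AWS.DataPipeline

-- Create an empty pipeline and activate it. A real workflow would upload
-- data sources, schedules and activities (PutPipelineDefinition) between
-- these two calls. 'cprsPipelineId' is assumed from the generated
-- response-lens naming scheme.
definePipeline :: Env -> IO ()
definePipeline env = runResourceT . runAWS env $ do
  created <- send (createPipeline "example-pipeline" "example-unique-token")
  let pipelineId = created ^. cprsPipelineId
  _ <- send (activatePipeline pipelineId)
  pure ()
```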

The types from this library are intended to be used with amazonka, which provides mechanisms for specifying AuthN/AuthZ information and sending requests.
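
A minimal sketch of that wiring, assuming the 1.3-era amazonka API (newEnv's signature and the generated response lens lprsPipelineIdList are the parts most likely to differ between versions):

```haskell
import Control.Lens
import Control.Monad.IO.Class (liftIO)
import Network.AWS
import Network.AWS.DataPipeline

main :: IO ()
main = do
  -- 'Discover' resolves credentials from the environment, configuration
  -- files or the instance metadata service.
  env <- newEnv Ireland Discover
  runResourceT . runAWS env $ do
    rs <- send listPipelines
    liftIO (print (rs ^. lprsPipelineIdList))  -- response lens name assumed
```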

Use of lenses is required for constructing and manipulating types. This is due to the amount of nesting of AWS types and transparency regarding de/serialisation into more palatable Haskell values. The provided lenses should be compatible with any of the major lens libraries such as lens or lens-family-core.
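
In practice that looks something like the sketch below; the generated lens name cpDescription is an assumption from the usual prefix scheme, and the haddocks list the exact names.

```haskell
{-# LANGUAGE OverloadedStrings #-}

import Control.Lens
import Data.Text (Text)
import Network.AWS.DataPipeline

-- Required fields go through the smart constructor; optional fields are
-- set afterwards with the generated lenses.
req :: CreatePipeline
req = createPipeline "my-pipeline" "my-unique-token"
        & cpDescription ?~ "Nightly log-processing pipeline"

-- Fields are read back with the same lenses.
reqDescription :: Maybe Text
reqDescription = req ^. cpDescription
```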

See Network.AWS.DataPipeline and the AWS API Reference to get started.



Downloads

Note: This package has metadata revisions in the cabal description newer than included in the tarball. To unpack the package including the revisions, use 'cabal get'.

Versions 0.0.0, 0.0.1, 0.0.2, 0.0.3, 0.0.4, 0.0.5, 0.0.6, 0.0.7, 0.0.8, 0.1.0, 0.1.1, 0.1.2, 0.1.3, 0.1.4, 0.2.0, 0.2.1, 0.2.2, 0.2.3, 0.3.0, 0.3.1, 0.3.2, 0.3.3, 0.3.4, 0.3.5, 0.3.6, 1.0.0, 1.0.1, 1.1.0, 1.2.0, 1.2.0.1, 1.2.0.2, 1.3.0, 1.3.1, 1.3.2, 1.3.3, 1.3.3.1, 1.3.4, 1.3.5, 1.3.6, 1.3.7, 1.4.0, 1.4.1, 1.4.2, 1.4.3, 1.4.4, 1.4.5, 1.5.0, 1.6.0, 1.6.1, 2.0
Dependencies amazonka-core (>=1.3.6 && <1.3.7), base (>=4.7 && <4.19)
License LicenseRef-OtherLicense
Copyright Copyright (c) 2013-2015 Brendan Hay
Author Brendan Hay
Maintainer Brendan Hay <brendan.g.hay@gmail.com>
Revised Revision 1 made by jack at 2024-05-13T07:45:27Z
Category Network, AWS, Cloud, Distributed Computing
Home page https://github.com/brendanhay/amazonka
Bug tracker https://github.com/brendanhay/amazonka/issues
Source repo head: git clone git://github.com/brendanhay/amazonka.git
Uploaded by BrendanHay at 2015-11-21T10:50:43Z
Distributions LTSHaskell:2.0, NixOS:2.0
Reverse Dependencies 1 direct, 0 indirect
Downloads 37606 total (63 in the last 30 days)
Rating (no votes yet) [estimated by Bayesian average]
Status Docs available
Last success reported on 2015-12-09

Readme for amazonka-datapipeline-1.3.6


Amazon Data Pipeline SDK

Version

1.3.6

Description

AWS Data Pipeline configures and manages a data-driven workflow called a pipeline. AWS Data Pipeline handles the details of scheduling and ensuring that data dependencies are met so that your application can focus on processing the data.

AWS Data Pipeline provides a JAR implementation of a task runner called AWS Data Pipeline Task Runner. AWS Data Pipeline Task Runner provides logic for common data management scenarios, such as performing database queries and running data analysis using Amazon Elastic MapReduce (Amazon EMR). You can use AWS Data Pipeline Task Runner as your task runner, or you can write your own task runner to provide custom data management.

AWS Data Pipeline implements two main sets of functionality. Use the first set to create a pipeline and define data sources, schedules, dependencies, and the transforms to be performed on the data. Use the second set in your task runner application to receive the next task ready for processing. The logic for performing the task, such as querying the data, running data analysis, or converting the data from one format to another, is contained within the task runner. The task runner performs the task assigned to it by the web service, reporting progress to the web service as it does so. When the task is done, the task runner reports the final success or failure of the task to the web service.
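
A hypothetical task-runner loop built on the second set of operations might look like the following. The PollForTask, ReportTaskProgress and SetTaskStatus operations are part of the API, but the generated lens and constructor names used here (pftrsTaskObject, toTaskId, Finished) are assumptions taken from the usual naming scheme; check the haddocks for the exact spellings.

```haskell
{-# LANGUAGE OverloadedStrings #-}

import Control.Lens
import Control.Monad (forever)
import Control.Monad.IO.Class (liftIO)
import Network.AWS
import Network.AWS.DataPipeline

runWorker :: Env -> IO ()
runWorker env = runResourceT . runAWS env . forever $ do
  -- PollForTask long-polls the web service for the next task assigned to
  -- this worker group.
  polled <- send (pollForTask "my-worker-group")
  case (polled ^. pftrsTaskObject) >>= view toTaskId of
    Nothing     -> pure ()                     -- nothing to do right now
    Just taskId -> do
      -- Report progress while the application-specific work happens, then
      -- report the final outcome back to the web service. 'Finished' is the
      -- assumed TaskStatus constructor name.
      _ <- send (reportTaskProgress taskId)
      liftIO (putStrLn "processing task ...")  -- your task logic goes here
      _ <- send (setTaskStatus taskId Finished)
      pure ()
```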

Documentation is available via Hackage and the AWS API Reference.

The types from this library are intended to be used with amazonka, which provides mechanisms for specifying AuthN/AuthZ information and sending requests.

Use of lenses is required for constructing and manipulating types. This is due to the amount of nesting of AWS types and transparency regarding de/serialisation into more palatable Haskell values. The provided lenses should be compatible with any of the major lens libraries such as lens or lens-family-core.

Contribute

For any problems, comments, or feedback please create an issue here on GitHub.

Note: this library is an auto-generated Haskell package. Please see amazonka-gen for more information.

Licence

amazonka-datapipeline is released under the Mozilla Public License Version 2.0.

Parts of the code are derived from AWS service descriptions, licensed under Apache 2.0. Source files subject to this contain an additional licensing clause in their header.