The overall architecture of Kinesis is as follows:
Basic Kinesis terminology
Kinesis Data Stream
A Kinesis Data Stream ingests large volumes of data in real time, stores it durably, and makes it available for consumption. A stream is made up of multiple shards; each shard contains a sequence of data records, and each data record carries a sequence number assigned by Kinesis Data Streams.
Data record
A data record is the unit of data stored in a Kinesis data stream. It consists of a sequence number, a partition key, and a data blob; the data blob can be up to 1 MB and is immutable.
Retention period
The retention period is how long data records are kept after being added to the stream. Records are retained for 24 hours by default; this can be raised to a maximum of 168 hours, but note that retention beyond 24 hours incurs an additional charge.
Shard
A shard can be thought of as a partition of a Kinesis stream; a Kinesis Data Stream consists of one or more shards. Each shard supports writes of up to 1,000 records or 1 MB per second, and reads of up to 5 transactions or 2 MB per second. A stream's total capacity is the sum of its shards' capacities, so if your data rate changes you can scale the number of shards in the stream accordingly.
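As a quick illustration of these per-shard limits, here is a minimal sketch of estimating how many shards a write workload needs (the `shards_needed` helper is hypothetical, not part of any AWS SDK):

```python
import math

# Per-shard write limits from the text: 1,000 records/s or 1 MB/s.
RECORDS_PER_SHARD = 1000
BYTES_PER_SHARD = 1024 * 1024

def shards_needed(records_per_sec, bytes_per_sec):
    """Return the minimum shard count that covers both write limits."""
    by_records = math.ceil(records_per_sec / RECORDS_PER_SHARD)
    by_bytes = math.ceil(bytes_per_sec / BYTES_PER_SHARD)
    return max(by_records, by_bytes, 1)

print(shards_needed(2500, 512 * 1024))      # record rate dominates: 3 shards
print(shards_needed(500, 5 * 1024 * 1024))  # byte rate dominates: 5 shards
```

Whichever limit you hit first determines the shard count, which is why streams with many small records can need more shards than their byte volume alone would suggest.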
Partition key
When each record is put into the stream, you must specify a Unicode string of no more than 256 bytes as its partition key. Kinesis hashes the partition key with MD5 to map it to a 128-bit integer, which is then used to assign the record to a shard. This is how incoming data gets partitioned.
In general, the number of distinct partition keys should be much larger than the number of shards, because the partition key determines which shard a record maps to. With enough distinct partition keys, data can be distributed evenly across the shards in a stream.
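A minimal sketch of that mapping: MD5-hash the key into a 128-bit integer, then pick the shard whose hash-key range contains it. The shard ranges below are made up for a hypothetical 2-shard stream; real ranges come from the HashKeyRange fields that DescribeStream reports.

```python
import hashlib

MAX_HASH = 2**128 - 1  # MD5 digests cover [0, 2^128 - 1]

# Illustrative hash-key ranges for a hypothetical 2-shard stream,
# splitting the 128-bit hash space in half.
SHARDS = [
    ('shardId-000000000000', 0, MAX_HASH // 2),
    ('shardId-000000000001', MAX_HASH // 2 + 1, MAX_HASH),
]

def hash_key(partition_key):
    """MD5-hash the partition key into a 128-bit integer."""
    digest = hashlib.md5(partition_key.encode('utf-8')).digest()
    return int.from_bytes(digest, 'big')

def shard_for(partition_key):
    """Pick the shard whose hash-key range contains the hashed key."""
    h = hash_key(partition_key)
    for shard_id, start, end in SHARDS:
        if start <= h <= end:
            return shard_id

print(shard_for('566'))
```

Because the mapping is a hash of the key, the same key always lands on the same shard, which is what preserves per-key ordering.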
Sequence number
Every data record has a sequence number that is unique within its shard. Kinesis Data Streams assigns the sequence number when the record is written to the stream. Sequence numbers for the same partition key generally increase over time: the longer the interval between write requests, the larger the sequence numbers grow.
Producer
A producer generates data records and puts them into a Kinesis Data Stream. For example, a web server sending log data to a stream is a producer.
Consumer
A consumer gets records from a Kinesis Data Stream and processes them. A stream can have multiple consumers, and each consumer can independently consume the stream's data at the same time.
Kinesis Producer Library (KPL)
The Kinesis Producer Library is a library for writing producer applications. It provides record batching, fault tolerance, monitoring, and more, simplifying producer development.
Kinesis Client Library (KCL)
The Kinesis Client Library is a library for writing consumer applications; it simplifies reading from a stream and is fault tolerant.
Hands-on code
Go
Producer that sends messages
package main

import (
	"flag"
	"fmt"
	"log"
	"time"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/credentials"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/kinesis"
)

func main() {
	var (
		streamName      = flag.String("stream", "Foo", "Stream name")
		kinesisEndpoint = flag.String("endpoint", "https://kinesis.us-west-2.amazonaws.com", "Kinesis endpoint")
		awsRegion       = flag.String("region", "us-west-2", "AWS Region")
		accessKeyID     = flag.String("key_id", "AKIA4WK2adad2RB4O", "AWS Access Key ID")
		secretAccessKey = flag.String("key", "TMjKPWrWutkMcwdadadadOs2i3gw4VMWI1++Pu", "AWS Secret Access Key")
	)
	flag.Parse()

	var records []*kinesis.PutRecordsRequestEntry
	var client = kinesis.New(session.Must(session.NewSession(
		aws.NewConfig().
			WithEndpoint(*kinesisEndpoint).
			WithRegion(*awsRegion).
			WithLogLevel(3),
	)))

	// Set static AccessKeyID/SecretAccessKey credentials; otherwise the SDK
	// reads them from the local config file ~/.aws/credentials.
	client.Config.Credentials = credentials.NewStaticCredentials(*accessKeyID, *secretAccessKey, "")

	// create stream if it doesn't exist
	//if err := createStream(client, streamName); err != nil {
	//	log.Fatalf("create stream error: %v", err)
	//}

	// Send one small record every 3 seconds.
	for {
		time.Sleep(time.Second * 3)
		records = append(records, &kinesis.PutRecordsRequestEntry{
			Data:         []byte("566"),
			PartitionKey: aws.String(time.Now().Format(time.RFC3339Nano)),
		})
		fmt.Println(records)
		putRecords(client, streamName, records)
		records = nil
	}
}

func createStream(client *kinesis.Kinesis, streamName *string) error {
	resp, err := client.ListStreams(&kinesis.ListStreamsInput{})
	if err != nil {
		return fmt.Errorf("list streams error: %v", err)
	}
	for _, val := range resp.StreamNames {
		if *streamName == *val {
			return nil
		}
	}
	_, err = client.CreateStream(
		&kinesis.CreateStreamInput{
			StreamName: streamName,
			ShardCount: aws.Int64(2),
		},
	)
	if err != nil {
		return err
	}
	return client.WaitUntilStreamExists(
		&kinesis.DescribeStreamInput{
			StreamName: streamName,
		},
	)
}

func putRecords(client *kinesis.Kinesis, streamName *string, records []*kinesis.PutRecordsRequestEntry) {
	_, err := client.PutRecords(&kinesis.PutRecordsInput{
		StreamName: streamName,
		Records:    records,
	})
	if err != nil {
		log.Fatalf("error putting records: %v", err)
	}
	fmt.Print(".")
}
Consumer that receives messages
package main

import (
	"context"
	"flag"
	"fmt"
	"log"
	"os"
	"os/signal"
	"syscall"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/credentials"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/kinesis"
	"github.com/harlow/kinesis-consumer"
)

// A myLogger provides a minimalistic logger satisfying the Logger interface.
type myLogger struct {
	logger *log.Logger
}

// Log logs the parameters to the stdlib logger. See log.Println.
func (l *myLogger) Log(args ...interface{}) {
	l.logger.Println(args...)
}

func main() {
	var (
		stream          = flag.String("stream", "Foo", "Stream name")
		kinesisEndpoint = flag.String("endpoint", "https://kinesis.us-west-2.amazonaws.com", "Kinesis endpoint")
		awsRegion       = flag.String("region", "us-west-2", "AWS Region")
		accessKeyID     = flag.String("key_id", "AKIA4WK2adadad2RB4O", "AWS Access Key ID")
		secretAccessKey = flag.String("key", "TMjKPWrWutkMc1sdsaadaOs2i3gw4VMWI1++Pu", "AWS Secret Access Key")
	)
	flag.Parse()

	// client
	var client = kinesis.New(session.Must(session.NewSession(
		aws.NewConfig().
			WithEndpoint(*kinesisEndpoint).
			WithRegion(*awsRegion),
	)))

	// Set static AccessKeyID/SecretAccessKey credentials; otherwise the SDK
	// reads them from the local config file ~/.aws/credentials.
	client.Config.Credentials = credentials.NewStaticCredentials(*accessKeyID, *secretAccessKey, "")

	// consumer
	c, err := consumer.New(
		*stream,
		consumer.WithClient(client),
	)
	if err != nil {
		log.Fatalf("consumer error: %v", err)
	}

	// scan
	ctx := trap()
	err = c.Scan(ctx, func(r *consumer.Record) error {
		fmt.Println(string(r.Data))
		return nil // continue scanning
	})
	if err != nil {
		log.Fatalf("scan error: %v", err)
	}
}

// trap returns a context that is canceled when the process receives an
// interrupt or termination signal, so Scan can shut down gracefully.
func trap() context.Context {
	ctx, cancel := context.WithCancel(context.Background())
	sigs := make(chan os.Signal, 1)
	signal.Notify(sigs, os.Interrupt, syscall.SIGTERM, syscall.SIGQUIT)
	go func() {
		sig := <-sigs
		log.Printf("received %s", sig)
		cancel()
	}()
	return ctx
}
Example run:
python
Producer that sends messages
import json
import boto3
import random
import datetime

kinesis = boto3.client(service_name='kinesis', region_name='us-west-2',
                       aws_access_key_id='AKIA4WKadadad2RB4O',
                       aws_secret_access_key='TMjKPWrWutkMcadadadOs2i3gw4VMWI1++Pu')

def getReferrer():
    data = {}
    now = datetime.datetime.now()
    str_now = now.isoformat()
    data['EVENT_TIME'] = str_now
    data['TICKER'] = random.choice(['AAPL', 'AMZN', 'MSFT', 'INTC', 'TBV'])
    price = random.random() * 100
    data['PRICE'] = round(price, 2)
    return data

while True:
    data = json.dumps(getReferrer())
    print(data)
    kinesis.put_record(
        StreamName="Foo",
        Data=data,
        PartitionKey="566")
Consumer that receives messages
import time
import boto3

if __name__ == '__main__':
    client = boto3.client('kinesis',
                          aws_access_key_id='AKIA4WK2adadad2RB4O',
                          aws_secret_access_key='TMjKPWrWutkMcwdwdwddOs2i3gw4VMWI1++Pu',
                          region_name='us-west-2')
    streamName = "Foo"
    response = client.describe_stream(StreamName=streamName)
    print("::stream description::", response)
    my_shard_id = response['StreamDescription']['Shards'][0]['ShardId']
    shard_iterator = client.get_shard_iterator(StreamName=streamName,
                                               ShardId=my_shard_id,
                                               ShardIteratorType='TRIM_HORIZON')
    my_shard_iterator = shard_iterator['ShardIterator']
    print("::shard_iterator::", my_shard_iterator)
    time.sleep(1)
    record_response = client.get_records(ShardIterator=my_shard_iterator,
                                         Limit=30)
    for v in record_response["Records"]:
        print(":: Records ::", v["Data"])
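The snippet above fetches only a single batch. A long-running consumer keeps following NextShardIterator from each response; a sketch along the same lines (`poll_shard` is a hypothetical helper, and `delay` keeps the loop under the 5-reads-per-second shard limit):

```python
import time

def poll_shard(client, shard_iterator, handle, batch_limit=30, delay=1.0):
    """Repeatedly fetch records from one shard, following NextShardIterator."""
    while shard_iterator is not None:
        resp = client.get_records(ShardIterator=shard_iterator, Limit=batch_limit)
        for record in resp['Records']:
            handle(record['Data'])
        # A missing/None NextShardIterator means the shard was closed
        # and has been fully read.
        shard_iterator = resp.get('NextShardIterator')
        time.sleep(delay)
```

With the boto3 client from the snippet above, this would be invoked as `poll_shard(client, my_shard_iterator, print)`.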
AWS CLI
AWS CLI installation and configuration guide: https://docs.aws.amazon.com/zh_cn/cli/latest/userguide/cli-chap-install.html
Essentially this just means setting up the local user credential files.
Configure ~/.aws/credentials
[default]
aws_access_key_id=AKIA4WK24L73T121233RB4O
aws_secret_access_key=TMjKPWrWutkM231s2i3gw4VMWI1++Pu
Configure ~/.aws/config
[default]
region=us-west-2
output=json
CLI walkthrough:
https://docs.amazonaws.cn/streams/latest/dev/fundamental-stream.html#create-stream
My run output:
wangluludeMacBook-Pro% aws kinesis create-stream --stream-name Foo --shard-count 1
wangluludeMacBook-Pro% aws kinesis describe-stream --stream-name Foo
{
    "StreamDescription": {
        "KeyId": null,
        "EncryptionType": "NONE",
        "StreamStatus": "ACTIVE",
        "StreamName": "Foo",
        "Shards": [
            {
                "ShardId": "shardId-000000000000",
                "HashKeyRange": {
                    "EndingHashKey": "340282366920938463463374607431768211455",
                    "StartingHashKey": "0"
                },
                "SequenceNumberRange": {
                    "StartingSequenceNumber": "49602204528562415675118160344216217127049708073102868482"
                }
            }
        ],
        "StreamARN": "arn:aws:kinesis:us-west-2:872605835255:stream/Foo",
        "EnhancedMonitoring": [
            {
                "ShardLevelMetrics": []
            }
        ],
        "StreamCreationTimestamp": 1576032597.0,
        "RetentionPeriodHours": 24
    }
}
wangluludeMacBook-Pro% aws kinesis list-streams
{
    "StreamNames": [
        "Foo",
        "ayla_dss2_test"
    ]
}
wangluludeMacBook-Pro% aws kinesis put-record --stream-name Foo --partition-key 123 --data testdata
{
    "ShardId": "shardId-000000000000",
    "SequenceNumber": "49602204528562415675118160394349161940648767999052873730"
}
wangluludeMacBook-Pro% aws kinesis get-shard-iterator --shard-id shardId-000000000000 --shard-iterator-type TRIM_HORIZON --stream-name Foo
{
    "ShardIterator": "AAAAAAAAAAGjBMl1A46coymI5xKVezCLQBuqeEfVof+tC6SzgV9lihtn836b/42RFahJTRqyKeGCptFFgFReDK9K3G3TY4I1Zh/usFnYwDCAsnjWCSR+udYrwNs5AfhF/22p4KDcK9cBTvw69X75bEwWjo8dU9/L/jfWHWlFNxYfdErq5oG0jcsmgJv0b+COPKWWGv/OxDypuHaTYKLsT7Pz1Mur2aZ3"
}
wangluludeMacBook-Pro% aws kinesis get-records --shard-iterator AAAAAAAAAAGjBMl1A46coymI5xKVezCLQBuqeEfVof+tC6SzgV9lihtn836b/42RFahJTRqyKeGCptFFgFReDK9K3G3TY4I1Zh/usFnYwDCAsnjWCSR+udYrwNs5AfhF/22p4KDcK9cBTvw69X75bEwWjo8dU9/L/jfWHWlFNxYfdErq5oG0jcsmgJv0b+COPKWWGv/OxDypuHaTYKLsT7Pz1Mur2aZ3
{
    "Records": [
        {
            "Data": "dGVzdGRhdGE=",
            "PartitionKey": "123",
            "ApproximateArrivalTimestamp": 1576032634.084,
            "SequenceNumber": "49602204528562415675118160394349161940648767999052873730"
        }
    ],
    "NextShardIterator": "AAAAAAAAAAEYcmeqYcUV/tWBbzfN72GrNP2EHkUDXM+9eGt9PdkIf5O8rDVCELprxPxNhx6QDSdj2H0llUhFnvgUtSv5la0GsK3ip62+XsOQHKGAv+Zl6nKDl0Fk7vRAoxzYJ9vY90ziTHxcTfG6QEZhLSUCY0Ronw2HJD5HWNWH4/fHnurIKHN3HW2BIJ0+XDi0p0kTdYP6t+Fh2wb6CGDvYJ9rkmXQ",
    "MillisBehindLatest": 0
}
wangluludeMacBook-Pro% aws kinesis put-record --stream-name Foo --partition-key 123 --data testdata
{
    "ShardId": "shardId-000000000000",
    "SequenceNumber": "49602204528562415675118160555668223310024894555132788738"
}
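Note that the Data field in the get-records output is base64-encoded; decoding it recovers the payload that put-record sent:

```python
import base64

# "dGVzdGRhdGE=" is the Data value returned by get-records above.
print(base64.b64decode('dGVzdGRhdGE=').decode())  # testdata
```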
References
https://docs.aws.amazon.com/zh_cn/streams/latest/dev/introduction.html