Compare commits

54 Commits

Author SHA1 Message Date
mo3et ee4736d6d0 Update CHANGELOG for release v3.8.2 2024-11-25 10:39:40 +00:00
Monet Lee 181d304496 build: update Server version. (#2887)
* build: update Server version.

* update version.
2024-11-25 10:38:43 +00:00
icey-yu 2b8b8b3204 revert: write msg to redis (#2883) 2024-11-25 10:32:04 +00:00
icey-yu f0a60a71ae fix: webhookBeforeSendSingleMsg will call before black and friend check (#2885) 2024-11-25 06:12:03 +00:00
icey-yu 38cfa4a9c4 fix: webhookAfterSingleMsgRead (#2884) 2024-11-25 06:11:34 +00:00
Morya cbade46ae0 fix: minor log typo (#2881) 2024-11-25 02:49:31 +00:00
icey-yu bfc4684a36 fix: err (#2876) 2024-11-22 09:46:02 +00:00
icey-yu 2c612ba6d0 fix: del login Policy (#2825)
* fix: del login Policy

* feat: offline push

* feat: offline push

* fix: err

* fix: err
2024-11-22 07:49:16 +00:00
dependabot[bot] c2fd0061b0 build(deps): bump github.com/golang-jwt/jwt/v4 from 4.5.0 to 4.5.1 (#2851)
Bumps [github.com/golang-jwt/jwt/v4](https://github.com/golang-jwt/jwt) from 4.5.0 to 4.5.1.
- [Release notes](https://github.com/golang-jwt/jwt/releases)
- [Changelog](https://github.com/golang-jwt/jwt/blob/main/VERSION_HISTORY.md)
- [Commits](https://github.com/golang-jwt/jwt/compare/v4.5.0...v4.5.1)

---
updated-dependencies:
- dependency-name: github.com/golang-jwt/jwt/v4
  dependency-type: direct:production
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-11-22 07:48:47 +00:00
Monet Lee b17f4f15e8 fix: move workflow to correct path (#2837)
* build: implement version file update when release.

* build: move file to correct path.

* remove.
2024-11-22 07:06:28 +00:00
icey-yu 334749c5a2 feat: Print Panic Log (#2850)
* feat: catch panic

* feat: docker file

* feat: cicd

* feat: dockerfile

---------

Co-authored-by: Monet Lee <monet_lee@163.com>
2024-11-22 07:05:39 +00:00
Sasaya bf88535800 fix: single chat offline push webhook (#2862) 2024-11-22 06:45:24 +00:00
Monet Lee a37eb5ea0d build: create changelog tool and workflows. (#2869) 2024-11-22 04:26:29 +00:00
icey-yu daba153d27 fix: admin token limit (#2871) 2024-11-22 04:25:28 +00:00
chao 15648cd44d fix: concurrent write to websocket connection (#2866)
* fix: GroupApplicationAcceptedNotification

* fix: GroupApplicationAcceptedNotification

* fix: NotificationUserInfoUpdate

* cicd: robot automated Change

* fix: component

* fix: getConversationInfo

* feat: cron task

* feat: cron task

* feat: cron task

* feat: cron task

* feat: cron task

* fix: minio config url recognition error

* update gomake version

* update gomake version

* fix: seq conversion bug

* fix: redis pipe exec

* fix: ImportFriends

* fix: A large number of logs keysAndValues length is not even

* feat: mark read aggregate write

* feat: online status supports redis cluster

* feat: online status supports redis cluster

* feat: online status supports redis cluster

* merge

* merge

* read seq is written to mongo

* read seq is written to mongo

* fix: invitation to join group notification

* fix: friend op_user_id

* feat: optimizing asynchronous context

* feat: optimizing memamq size

* feat: add GetSeqMessage

* feat: implement no gob encoder.

* update unitTest content.

* Update hub_server.go

* feat: GroupApplicationAgreeMemberEnterNotification

* fix: encoder replace to json encoder.

* feat: GroupApplicationAgreeMemberEnterNotification

* feat: go.mod

* feat: go.mod

* feat: join group notification and get seq

* feat: join group notification and get seq

* feat: avoid pulling messages from sessions with a large number of max seq values of 0

* feat: API supports gzip

* go.mod

* fix: nil pointer error on close

* fix: listen error

* fix: listen error

* update go.mod

* feat: add log

* fix: token parse token value

* fix: GetMsgBySeqs boundary issues

* merge: merge main code into js branch. (#2648)

* feat: update group notification when set to null. (#2590)

* refactor: refactor workflows contents.

* add tool workflows.

* update field.

* fix: remove chat error.

* Fix err.

* fix error.

* remove cn comment.

* update workflows files.

* update infra config.

* move workflows.

* feat: update bot.

* fix: solve incorrect outdated msg get.

* update get docIDs logic.

* update

* update skip logic.

* fix

* update.

* fix: delay deleteObject func.

* remove unused content.

* feat: update group notification when set to null.

* update log standard.

* feat: add long time push msg in prometheus (#2584)

* feat: add long time push msg in prometheus

* fix: log print

* fix: go mod

* fix: log msg

* fix: log init

* feat: push msg

* feat: go mod, remove cgo package

* feat: remove error log

* feat: test dummy push

* feat: redis pool config

* feat: push to kafka log

* feat: supports getting messages based on session ID and seq (#2582)

* fix: GroupApplicationAcceptedNotification

* fix: GroupApplicationAcceptedNotification

* fix: NotificationUserInfoUpdate

* cicd: robot automated Change

* fix: component

* fix: getConversationInfo

* feat: cron task

* feat: cron task

* feat: cron task

* feat: cron task

* feat: cron task

* fix: minio config url recognition error

* update gomake version

* update gomake version

* fix: seq conversion bug

* fix: redis pipe exec

* fix: ImportFriends

* fix: A large number of logs keysAndValues length is not even

* feat: mark read aggregate write

* feat: online status supports redis cluster

* feat: online status supports redis cluster

* feat: online status supports redis cluster

* merge

* merge

* read seq is written to mongo

* read seq is written to mongo

* fix: invitation to join group notification

* fix: friend op_user_id

* feat: optimizing asynchronous context

* feat: optimizing memamq size

* feat: add GetSeqMessage

* feat: GroupApplicationAgreeMemberEnterNotification

* feat: GroupApplicationAgreeMemberEnterNotification

* feat: go.mod

* feat: go.mod

* feat: join group notification and get seq

---------

Co-authored-by: withchao <withchao@users.noreply.github.com>

* feat: implement request batch count limit. (#2591)

* refactor: refactor workflows contents.

* add tool workflows.

* update field.

* fix: remove chat error.

* Fix err.

* fix error.

* remove cn comment.

* update workflows files.

* update infra config.

* move workflows.

* feat: update bot.

* fix: solve incorrect outdated msg get.

* update get docIDs logic.

* update

* update skip logic.

* fix

* update.

* fix: delay deleteObject func.

* remove unused content.

* update log type.

* feat: implement request batch count limit.

* update

* update

* fix: getting messages based on session ID and seq (#2595)

* fix: GroupApplicationAcceptedNotification

* fix: GroupApplicationAcceptedNotification

* fix: NotificationUserInfoUpdate

* cicd: robot automated Change

* fix: component

* fix: getConversationInfo

* feat: cron task

* feat: cron task

* feat: cron task

* feat: cron task

* feat: cron task

* fix: minio config url recognition error

* update gomake version

* update gomake version

* fix: seq conversion bug

* fix: redis pipe exec

* fix: ImportFriends

* fix: A large number of logs keysAndValues length is not even

* feat: mark read aggregate write

* feat: online status supports redis cluster

* feat: online status supports redis cluster

* feat: online status supports redis cluster

* merge

* merge

* read seq is written to mongo

* read seq is written to mongo

* fix: invitation to join group notification

* fix: friend op_user_id

* feat: optimizing asynchronous context

* feat: optimizing memamq size

* feat: add GetSeqMessage

* feat: GroupApplicationAgreeMemberEnterNotification

* feat: GroupApplicationAgreeMemberEnterNotification

* feat: go.mod

* feat: go.mod

* feat: join group notification and get seq

* feat: join group notification and get seq

---------

Co-authored-by: withchao <withchao@users.noreply.github.com>

* feat: avoid pulling messages from sessions with a large number of max seq values of 0 (#2602)

* fix: GroupApplicationAcceptedNotification

* fix: GroupApplicationAcceptedNotification

* fix: NotificationUserInfoUpdate

* cicd: robot automated Change

* fix: component

* fix: getConversationInfo

* feat: cron task

* feat: cron task

* feat: cron task

* feat: cron task

* feat: cron task

* fix: minio config url recognition error

* update gomake version

* update gomake version

* fix: seq conversion bug

* fix: redis pipe exec

* fix: ImportFriends

* fix: A large number of logs keysAndValues length is not even

* feat: mark read aggregate write

* feat: online status supports redis cluster

* feat: online status supports redis cluster

* feat: online status supports redis cluster

* merge

* merge

* read seq is written to mongo

* read seq is written to mongo

* fix: invitation to join group notification

* fix: friend op_user_id

* feat: optimizing asynchronous context

* feat: optimizing memamq size

* feat: add GetSeqMessage

* feat: GroupApplicationAgreeMemberEnterNotification

* feat: GroupApplicationAgreeMemberEnterNotification

* feat: go.mod

* feat: go.mod

* feat: join group notification and get seq

* feat: join group notification and get seq

* feat: avoid pulling messages from sessions with a large number of max seq values of 0

---------

Co-authored-by: withchao <withchao@users.noreply.github.com>

* refactor: improve db structure in `storage/controller` (#2604)

* refactor: refactor workflows contents.

* add tool workflows.

* update field.

* fix: remove chat error.

* Fix err.

* fix error.

* remove cn comment.

* update workflows files.

* update infra config.

* move workflows.

* feat: update bot.

* fix: solve incorrect outdated msg get.

* update get docIDs logic.

* update

* update skip logic.

* fix

* update.

* fix: delay deleteObject func.

* remove unused content.

* update log type.

* feat: implement request batch count limit.

* update

* update

* refactor: improve db structure in `storage/controller`

* feat: implement offline push using kafka (#2600)

* refactor: refactor workflows contents.

* add tool workflows.

* update field.

* fix: remove chat error.

* Fix err.

* fix error.

* remove cn comment.

* update workflows files.

* update infra config.

* move workflows.

* feat: update bot.

* fix: solve incorrect outdated msg get.

* update get docIDs logic.

* update

* update skip logic.

* fix

* update.

* fix: delay deleteObject func.

* remove unused content.

* update log type.

* feat: implement request batch count limit.

* update

* update

* feat: implement offline push.

* feat: implement batch Push split

* update go mod

* feat: implement kafka producer and consumer.

* update format.

* add PushMQ log.

* feat: update Handler logic.

* update MQ logic.

* update

* update

* fix: update OfflinePushConsumerHandler.

* feat: API supports gzip (#2609)

* fix: GroupApplicationAcceptedNotification

* fix: GroupApplicationAcceptedNotification

* fix: NotificationUserInfoUpdate

* cicd: robot automated Change

* fix: component

* fix: getConversationInfo

* feat: cron task

* feat: cron task

* feat: cron task

* feat: cron task

* feat: cron task

* fix: minio config url recognition error

* update gomake version

* update gomake version

* fix: seq conversion bug

* fix: redis pipe exec

* fix: ImportFriends

* fix: A large number of logs keysAndValues length is not even

* feat: mark read aggregate write

* feat: online status supports redis cluster

* feat: online status supports redis cluster

* feat: online status supports redis cluster

* merge

* merge

* read seq is written to mongo

* read seq is written to mongo

* fix: invitation to join group notification

* fix: friend op_user_id

* feat: optimizing asynchronous context

* feat: optimizing memamq size

* feat: add GetSeqMessage

* feat: GroupApplicationAgreeMemberEnterNotification

* feat: GroupApplicationAgreeMemberEnterNotification

* feat: go.mod

* feat: go.mod

* feat: join group notification and get seq

* feat: join group notification and get seq

* feat: avoid pulling messages from sessions with a large number of max seq values of 0

* feat: API supports gzip

---------

Co-authored-by: withchao <withchao@users.noreply.github.com>

* Fix err (#2608)

* refactor: refactor workflows contents.

* add tool workflows.

* update field.

* fix: remove chat error.

* Fix err.

* fix error.

* remove cn comment.

* update workflows files.

* update infra config.

* move workflows.

* feat: update bot.

* fix: solve incorrect outdated msg get.

* update get docIDs logic.

* update

* update skip logic.

* fix

* update.

* fix: delay deleteObject func.

* remove unused content.

* update log type.

* feat: implement request batch count limit.

* update

* update

* feat: add rocksTimeout

* feat: wrap logs

* feat: add logs

* feat: listen config

* feat: enable listen TIME_WAIT port

* feat: add logs

* feat: cache batch

* chore: enable fullUserCache

* feat: push rpc num

* feat: push err

* feat: with operationID

* feat: sleep

* feat: change 1s

* feat: change log

* feat: implement Getbatch in rpcCache.

* feat: print getOnline cost

* feat: change log

* feat: change kafka and push config

* feat: del interface

* feat: fix err

* feat: change config

* feat: go mod

* feat: change config

* feat: change config

* feat: add sleep in push

* feat: warn logs

* feat: logs

* feat: logs

* feat: change port

* feat: start config

* feat: remove port reuse

* feat: prometheus config

* feat: prometheus config

* feat: prometheus config

* feat: add long time send msg to grafana

* feat: init

* feat: init

* feat: implement offline push.

* feat: batch get user online

* feat: implement batch Push split

* update go mod

* Revert "feat: change port"

This reverts commit 06d5e944

* feat: change port

* feat: change config

* feat: implement kafka producer and consumer.

* update format.

* add PushMQ log.

* feat: get all online users and init push

* feat: lock in online cache

* feat: config

* fix: init online status

* fix: add logs

* fix: userIDs

* fix: add logs

* feat: update Handler logic.

* update MQ logic.

* update

* update

* fix: method name

* fix: update OfflinePushConsumerHandler.

* fix: prommetrics

* fix: add logs

* fix: ctx

* fix: log

* fix: config

* feat: change port

* fix: atomic online cache status

---------

Co-authored-by: Monet Lee <monet_lee@163.com>

* feature: add GetConversationsHasReadAndMaxSeq interface to the WebSocket API. (#2611)

* fix: lru lock (#2613)

* fix: lru lock

* fix: lru lock

* fix: lru lock

* fix: nil pointer error on close (#2618)

* fix: GroupApplicationAcceptedNotification

* fix: GroupApplicationAcceptedNotification

* fix: NotificationUserInfoUpdate

* cicd: robot automated Change

* fix: component

* fix: getConversationInfo

* feat: cron task

* feat: cron task

* feat: cron task

* feat: cron task

* feat: cron task

* fix: minio config url recognition error

* update gomake version

* update gomake version

* fix: seq conversion bug

* fix: redis pipe exec

* fix: ImportFriends

* fix: A large number of logs keysAndValues length is not even

* feat: mark read aggregate write

* feat: online status supports redis cluster

* feat: online status supports redis cluster

* feat: online status supports redis cluster

* merge

* merge

* read seq is written to mongo

* read seq is written to mongo

* fix: invitation to join group notification

* fix: friend op_user_id

* feat: optimizing asynchronous context

* feat: optimizing memamq size

* feat: add GetSeqMessage

* feat: GroupApplicationAgreeMemberEnterNotification

* feat: GroupApplicationAgreeMemberEnterNotification

* feat: go.mod

* feat: go.mod

* feat: join group notification and get seq

* feat: join group notification and get seq

* feat: avoid pulling messages from sessions with a large number of max seq values of 0

* feat: API supports gzip

* go.mod

* fix: nil pointer error on close

---------

Co-authored-by: withchao <withchao@users.noreply.github.com>

* feat: create group can push notification (#2617)

* fix: blockage caused by listen error (#2620)

* fix: GroupApplicationAcceptedNotification

* fix: GroupApplicationAcceptedNotification

* fix: NotificationUserInfoUpdate

* cicd: robot automated Change

* fix: component

* fix: getConversationInfo

* feat: cron task

* feat: cron task

* feat: cron task

* feat: cron task

* feat: cron task

* fix: minio config url recognition error

* update gomake version

* update gomake version

* fix: seq conversion bug

* fix: redis pipe exec

* fix: ImportFriends

* fix: A large number of logs keysAndValues length is not even

* feat: mark read aggregate write

* feat: online status supports redis cluster

* feat: online status supports redis cluster

* feat: online status supports redis cluster

* merge

* merge

* read seq is written to mongo

* read seq is written to mongo

* fix: invitation to join group notification

* fix: friend op_user_id

* feat: optimizing asynchronous context

* feat: optimizing memamq size

* feat: add GetSeqMessage

* feat: GroupApplicationAgreeMemberEnterNotification

* feat: GroupApplicationAgreeMemberEnterNotification

* feat: go.mod

* feat: go.mod

* feat: join group notification and get seq

* feat: join group notification and get seq

* feat: avoid pulling messages from sessions with a large number of max seq values of 0

* feat: API supports gzip

* go.mod

* fix: nil pointer error on close

* fix: listen error

---------

Co-authored-by: withchao <withchao@users.noreply.github.com>

* fix: go.mod (#2621)

* fix: GroupApplicationAcceptedNotification

* fix: GroupApplicationAcceptedNotification

* fix: NotificationUserInfoUpdate

* cicd: robot automated Change

* fix: component

* fix: getConversationInfo

* feat: cron task

* feat: cron task

* feat: cron task

* feat: cron task

* feat: cron task

* fix: minio config url recognition error

* update gomake version

* update gomake version

* fix: seq conversion bug

* fix: redis pipe exec

* fix: ImportFriends

* fix: A large number of logs keysAndValues length is not even

* feat: mark read aggregate write

* feat: online status supports redis cluster

* feat: online status supports redis cluster

* feat: online status supports redis cluster

* merge

* merge

* read seq is written to mongo

* read seq is written to mongo

* fix: invitation to join group notification

* fix: friend op_user_id

* feat: optimizing asynchronous context

* feat: optimizing memamq size

* feat: add GetSeqMessage

* feat: GroupApplicationAgreeMemberEnterNotification

* feat: GroupApplicationAgreeMemberEnterNotification

* feat: go.mod

* feat: go.mod

* feat: join group notification and get seq

* feat: join group notification and get seq

* feat: avoid pulling messages from sessions with a large number of max seq values of 0

* feat: API supports gzip

* go.mod

* fix: nil pointer error on close

* fix: listen error

* fix: listen error

* update go.mod

---------

Co-authored-by: withchao <withchao@users.noreply.github.com>

* feat: improve searchMsg implement. (#2614)

* refactor: refactor workflows contents.

* add tool workflows.

* update field.

* fix: remove chat error.

* Fix err.

* fix error.

* remove cn comment.

* update workflows files.

* update infra config.

* move workflows.

* feat: update bot.

* fix: solve incorrect outdated msg get.

* update get docIDs logic.

* update

* update skip logic.

* fix

* update.

* fix: delay deleteObject func.

* remove unused content.

* update log type.

* feat: implement request batch count limit.

* update

* update

* remove unused script.

* feat: improve searchMsg implement.

* update mongo config.

* Fix lock (#2622)

* fix: log

* fix: lock

* fix: update setGroupInfoEX field name. (#2625)

* refactor: refactor workflows contents.

* add tool workflows.

* update field.

* fix: remove chat error.

* Fix err.

* fix error.

* remove cn comment.

* update workflows files.

* update infra config.

* move workflows.

* feat: update bot.

* fix: solve incorrect outdated msg get.

* update get docIDs logic.

* update

* update skip logic.

* fix

* update.

* fix: delay deleteObject func.

* remove unused content.

* update log type.

* feat: implement request batch count limit.

* update

* update

* fix: update setGroupInfoEX field name.

* fix: update setGroupInfoEX field name (#2626)

* refactor: refactor workflows contents.

* add tool workflows.

* update field.

* fix: remove chat error.

* Fix err.

* fix error.

* remove cn comment.

* update workflows files.

* update infra config.

* move workflows.

* feat: update bot.

* fix: solve incorrect outdated msg get.

* update get docIDs logic.

* update

* update skip logic.

* fix

* update.

* fix: delay deleteObject func.

* remove unused content.

* update log type.

* feat: implement request batch count limit.

* update

* update

* fix: update setGroupInfoEX field name.

* fix: update setGroupInfoEX field name

* feat: msg gateway add log (#2631)

* fix: GroupApplicationAcceptedNotification

* fix: GroupApplicationAcceptedNotification

* fix: NotificationUserInfoUpdate

* cicd: robot automated Change

* fix: component

* fix: getConversationInfo

* feat: cron task

* feat: cron task

* feat: cron task

* feat: cron task

* feat: cron task

* fix: minio config url recognition error

* update gomake version

* update gomake version

* fix: seq conversion bug

* fix: redis pipe exec

* fix: ImportFriends

* fix: A large number of logs keysAndValues length is not even

* feat: mark read aggregate write

* feat: online status supports redis cluster

* feat: online status supports redis cluster

* feat: online status supports redis cluster

* merge

* merge

* read seq is written to mongo

* read seq is written to mongo

* fix: invitation to join group notification

* fix: friend op_user_id

* feat: optimizing asynchronous context

* feat: optimizing memamq size

* feat: add GetSeqMessage

* feat: GroupApplicationAgreeMemberEnterNotification

* feat: GroupApplicationAgreeMemberEnterNotification

* feat: go.mod

* feat: go.mod

* feat: join group notification and get seq

* feat: join group notification and get seq

* feat: avoid pulling messages from sessions with a large number of max seq values of 0

* feat: API supports gzip

* go.mod

* fix: nil pointer error on close

* fix: listen error

* fix: listen error

* update go.mod

* feat: add log

---------

Co-authored-by: withchao <withchao@users.noreply.github.com>

* fix: update setGroupInfoEx func name and field. (#2634)

* refactor: refactor workflows contents.

* add tool workflows.

* update field.

* fix: remove chat error.

* Fix err.

* fix error.

* remove cn comment.

* update workflows files.

* update infra config.

* move workflows.

* feat: update bot.

* fix: solve incorrect outdated msg get.

* update get docIDs logic.

* update

* update skip logic.

* fix

* update.

* fix: delay deleteObject func.

* remove unused content.

* update log type.

* feat: implement request batch count limit.

* update

* update

* fix: update setGroupInfoEx func name and field.

* refactor: update groupinfoEx field.

* refactor: update database name in mongodb.yml

* add groupName Condition

* fix: setConversations req fill. (#2645)

* refactor: refactor workflows contents.

* add tool workflows.

* update field.

* fix: remove chat error.

* Fix err.

* fix error.

* remove cn comment.

* update workflows files.

* update infra config.

* move workflows.

* feat: update bot.

* fix: solve incorrect outdated msg get.

* update get docIDs logic.

* update

* update skip logic.

* fix

* update.

* fix: delay deleteObject func.

* remove unused content.

* update log type.

* feat: implement request batch count limit.

* update

* update

* fix: setConversations req fill.

* fix: GetMsgBySeqs boundary issues (#2647)

* fix: GroupApplicationAcceptedNotification

* fix: GroupApplicationAcceptedNotification

* fix: NotificationUserInfoUpdate

* cicd: robot automated Change

* fix: component

* fix: getConversationInfo

* feat: cron task

* feat: cron task

* feat: cron task

* feat: cron task

* feat: cron task

* fix: minio config url recognition error

* update gomake version

* update gomake version

* fix: seq conversion bug

* fix: redis pipe exec

* fix: ImportFriends

* fix: A large number of logs keysAndValues length is not even

* feat: mark read aggregate write

* feat: online status supports redis cluster

* feat: online status supports redis cluster

* feat: online status supports redis cluster

* merge

* merge

* read seq is written to mongo

* read seq is written to mongo

* fix: invitation to join group notification

* fix: friend op_user_id

* feat: optimizing asynchronous context

* feat: optimizing memamq size

* feat: add GetSeqMessage

* feat: GroupApplicationAgreeMemberEnterNotification

* feat: GroupApplicationAgreeMemberEnterNotification

* feat: go.mod

* feat: go.mod

* feat: join group notification and get seq

* feat: join group notification and get seq

* feat: avoid pulling messages from sessions with a large number of max seq values of 0

* feat: API supports gzip

* go.mod

* fix: nil pointer error on close

* fix: listen error

* fix: listen error

* update go.mod

* feat: add log

* fix: token parse token value

* fix: GetMsgBySeqs boundary issues

---------

Co-authored-by: withchao <withchao@users.noreply.github.com>

* fix: the attribute version is obsolete, remove it (#2644)

* refactor: update Userregister request field. (#2650)

---------

Co-authored-by: Monet Lee <monet_lee@163.com>
Co-authored-by: icey-yu <119291641+icey-yu@users.noreply.github.com>
Co-authored-by: chao <48119764+withchao@users.noreply.github.com>
Co-authored-by: withchao <withchao@users.noreply.github.com>
Co-authored-by: 蔡相跃 <caixiangyue007@gmail.com>

* update go mod

* fix: sn_ not sort

* fix: sn_ not sort

* fix: sn_ not sort

* fix: jssdk add

* fix: jssdk support

* fix: jssdk support

* fix: jssdk support

* fix: the message I sent is not set to read seq in mongodb

* fix: cannot modify group member avatars

* merge: update code from main to v3.8-js-sdk-only. (#2720)

* fix: update groupName invalid. (#2673)

* refactor: change platform to platformID (#2670)

* feat: don't return nil data (#2675)

Co-authored-by: Monet Lee <monet_lee@163.com>

* refactor: update fields type in userStatus and check registered. (#2676)

* fix: usertoken auth. (#2677)

* refactor: update fields type in userStatus and check registered.

* fix: usertoken auth.

* update contents.

* update content.

* update

* fix

* update pb file.

* feat: add friend agree after callback (#2680)

* fix: sn not sort (#2682)

* fix: GroupApplicationAcceptedNotification

* fix: GroupApplicationAcceptedNotification

* fix: NotificationUserInfoUpdate

* cicd: robot automated Change

* fix: component

* fix: getConversationInfo

* feat: cron task

* feat: cron task

* feat: cron task

* feat: cron task

* feat: cron task

* fix: minio config url recognition error

* update gomake version

* update gomake version

* fix: seq conversion bug

* fix: redis pipe exec

* fix: ImportFriends

* fix: A large number of logs keysAndValues length is not even

* feat: mark read aggregate write

* feat: online status supports redis cluster

* feat: online status supports redis cluster

* feat: online status supports redis cluster

* merge

* merge

* read seq is written to mongo

* read seq is written to mongo

* fix: invitation to join group notification

* fix: friend op_user_id

* feat: optimizing asynchronous context

* feat: optimizing memamq size

* feat: add GetSeqMessage

* feat: GroupApplicationAgreeMemberEnterNotification

* feat: GroupApplicationAgreeMemberEnterNotification

* feat: go.mod

* feat: go.mod

* feat: join group notification and get seq

* feat: join group notification and get seq

* feat: avoid pulling messages from sessions with a large number of max seq values of 0

* feat: API supports gzip

* go.mod

* fix: nil pointer error on close

* fix: listen error

* fix: listen error

* update go.mod

* feat: add log

* fix: token parse token value

* fix: GetMsgBySeqs boundary issues

* fix: sn_ not sort

* fix: sn_ not sort

* fix: sn_ not sort

---------

Co-authored-by: withchao <withchao@users.noreply.github.com>

* refactor: add GetAdminToken interface. (#2684)

* refactor: add GetAdminToken interface.

* update config.

* fix: admin token (#2686)

* fix: update workflows logic. (#2688)

* refactor: add GetAdminToken interface.

* update config.

* update workflows logic.

* fix: admin token (#2687)

* update the front image (#2692)

* update the front image

* update version

* feat: improve publish docker image workflows (#2697)

* refactor: add GetAdminToken interface.

* update config.

* update workflows logic.

* feat: improve publish docker image workflows

* update condition logic.

* fix: update load file logic. (#2700)

* refactor: add GetAdminToken interface.

* update config.

* update workflows logic.

* feat: improve publish docker image workflows

* update condition logic.

* fix: update load file logic.

* feat: Msg filter (#2703)

* feat: msg filter

* feat: msg filter

* feat: msg filter

* feat: provide the interface required by js sdk (#2712)

* fix: GroupApplicationAcceptedNotification

* fix: GroupApplicationAcceptedNotification

* fix: NotificationUserInfoUpdate

* cicd: robot automated Change

* fix: component

* fix: getConversationInfo

* feat: cron task

* feat: cron task

* feat: cron task

* feat: cron task

* feat: cron task

* fix: minio config url recognition error

* update gomake version

* update gomake version

* fix: seq conversion bug

* fix: redis pipe exec

* fix: ImportFriends

* fix: A large number of logs keysAndValues length is not even

* feat: mark read aggregate write

* feat: online status supports redis cluster

* feat: online status supports redis cluster

* feat: online status supports redis cluster

* merge

* merge

* read seq is written to mongo

* read seq is written to mongo

* fix: invitation to join group notification

* fix: friend op_user_id

* feat: optimizing asynchronous context

* feat: optimizing memamq size

* feat: add GetSeqMessage

* feat: GroupApplicationAgreeMemberEnterNotification

* feat: GroupApplicationAgreeMemberEnterNotification

* feat: go.mod

* feat: go.mod

* feat: join group notification and get seq

* feat: join group notification and get seq

* feat: avoid pulling messages from sessions with a large number of max seq values of 0

* feat: API supports gzip

* go.mod

* fix: nil pointer error on close

* fix: listen error

* fix: listen error

* update go.mod

* feat: add log

* fix: token parse token value

* fix: GetMsgBySeqs boundary issues

* fix: sn_ not sort

* fix: sn_ not sort

* fix: sn_ not sort

* fix: jssdk add

* fix: jssdk support

* fix: jssdk support

* fix: jssdk support

---------

Co-authored-by: withchao <withchao@users.noreply.github.com>

* Line webhook (#2716)

* feat: online and offline webhook

* feat: online and offline webhook

* feat: remove zk

* fix: the message I sent is not set to read seq in mongodb (#2718)

* fix: GroupApplicationAcceptedNotification

* fix: GroupApplicationAcceptedNotification

* fix: NotificationUserInfoUpdate

* cicd: robot automated Change

* fix: component

* fix: getConversationInfo

* feat: cron task

* feat: cron task

* feat: cron task

* feat: cron task

* feat: cron task

* fix: minio config url recognition error

* update gomake version

* update gomake version

* fix: seq conversion bug

* fix: redis pipe exec

* fix: ImportFriends

* fix: A large number of logs keysAndValues length is not even

* feat: mark read aggregate write

* feat: online status supports redis cluster

* feat: online status supports redis cluster

* feat: online status supports redis cluster

* merge

* merge

* read seq is written to mongo

* read seq is written to mongo

* fix: invitation to join group notification

* fix: friend op_user_id

* feat: optimizing asynchronous context

* feat: optimizing memamq size

* feat: add GetSeqMessage

* feat: GroupApplicationAgreeMemberEnterNotification

* feat: GroupApplicationAgreeMemberEnterNotification

* feat: go.mod

* feat: go.mod

* feat: join group notification and get seq

* feat: join group notification and get seq

* feat: avoid pulling messages from sessions with a large number of max seq values of 0

* feat: API supports gzip

* go.mod

* fix: nil pointer error on close

* fix: listen error

* fix: listen error

* update go.mod

* feat: add log

* fix: token parse token value

* fix: GetMsgBySeqs boundary issues

* fix: sn_ not sort

* fix: sn_ not sort

* fix: sn_ not sort

* fix: jssdk add

* fix: jssdk support

* fix: jssdk support

* fix: jssdk support

* fix: the message I sent is not set to read seq in mongodb

---------

Co-authored-by: withchao <withchao@users.noreply.github.com>

* fix: cannot modify group member avatars (#2719)

* fix: GroupApplicationAcceptedNotification

* fix: GroupApplicationAcceptedNotification

* fix: NotificationUserInfoUpdate

* cicd: robot automated Change

* fix: component

* fix: getConversationInfo

* feat: cron task

* feat: cron task

* feat: cron task

* feat: cron task

* feat: cron task

* fix: minio config url recognition error

* update gomake version

* update gomake version

* fix: seq conversion bug

* fix: redis pipe exec

* fix: ImportFriends

* fix: A large number of logs keysAndValues length is not even

* feat: mark read aggregate write

* feat: online status supports redis cluster

* feat: online status supports redis cluster

* feat: online status supports redis cluster

* merge

* merge

* read seq is written to mongo

* read seq is written to mongo

* fix: invitation to join group notification

* fix: friend op_user_id

* feat: optimizing asynchronous context

* feat: optimizing memamq size

* feat: add GetSeqMessage

* feat: GroupApplicationAgreeMemberEnterNotification

* feat: GroupApplicationAgreeMemberEnterNotification

* feat: go.mod

* feat: go.mod

* feat: join group notification and get seq

* feat: join group notification and get seq

* feat: avoid pulling messages from sessions with a large number of max seq values of 0

* feat: API supports gzip

* go.mod

* fix: nil pointer error on close

* fix: listen error

* fix: listen error

* update go.mod

* feat: add log

* fix: token parse token value

* fix: GetMsgBySeqs boundary issues

* fix: sn_ not sort

* fix: sn_ not sort

* fix: sn_ not sort

* fix: jssdk add

* fix: jssdk support

* fix: jssdk support

* fix: jssdk support

* fix: the message I sent is not set to read seq in mongodb

* fix: cannot modify group member avatars

---------

Co-authored-by: withchao <withchao@users.noreply.github.com>

---------

Co-authored-by: Monet Lee <monet_lee@163.com>
Co-authored-by: icey-yu <119291641+icey-yu@users.noreply.github.com>
Co-authored-by: chao <48119764+withchao@users.noreply.github.com>
Co-authored-by: withchao <withchao@users.noreply.github.com>
Co-authored-by: skiffer-git <72860476+skiffer-git@users.noreply.github.com>

* fix: MemberEnterNotification

* fix: MemberEnterNotification

* fix: MsgData status

* merge

* merge: update code from main to v3.8-js-sdk-only. (#2818)

* feat: implement merge milestone PR to target-branch. (#2796)

* build: improve workflows logic. (#2801)

* fix: improve time condition check method. (#2804)

* fix: improve time condition check method.

* fix

* fix: webhook before online push (#2805)

* fix: set own read seq in MongoDB when sender send a message. (#2808)

* fix: solve err Notification when setGroupInfo. (#2806)

* fix: solve err Notification when setGroupInfo.

* build: update checkout version.

* fix: update notification contents.

* Introducing OpenIM Guru on Gurubase.io (#2788)

* feat: support app update service (#2811)

* fix: GroupApplicationAcceptedNotification

* fix: GroupApplicationAcceptedNotification

* fix: NotificationUserInfoUpdate

* cicd: robot automated Change

* fix: component

* fix: getConversationInfo

* feat: cron task

* feat: cron task

* feat: cron task

* feat: cron task

* feat: cron task

* fix: minio config url recognition error

* update gomake version

* update gomake version

* fix: seq conversion bug

* fix: redis pipe exec

* fix: ImportFriends

* fix: A large number of logs keysAndValues length is not even

* feat: mark read aggregate write

* feat: online status supports redis cluster

* feat: online status supports redis cluster

* feat: online status supports redis cluster

* merge

* merge

* read seq is written to mongo

* read seq is written to mongo

* fix: invitation to join group notification

* fix: friend op_user_id

* feat: optimizing asynchronous context

* feat: optimizing memamq size

* feat: add GetSeqMessage

* feat: GroupApplicationAgreeMemberEnterNotification

* feat: GroupApplicationAgreeMemberEnterNotification

* feat: go.mod

* feat: go.mod

* feat: join group notification and get seq

* feat: join group notification and get seq

* feat: avoid pulling messages from sessions with a large number of max seq values of 0

* feat: API supports gzip

* go.mod

* fix: nil pointer error on close

* fix: listen error

* fix: listen error

* update go.mod

* feat: add log

* fix: token parse token value

* fix: GetMsgBySeqs boundary issues

* fix: sn_ not sort

* fix: sn_ not sort

* fix: sn_ not sort

* fix: jssdk add

* fix: jssdk support

* fix: jssdk support

* fix: jssdk support

* fix: the message I sent is not set to read seq in mongodb

* fix: cannot modify group member avatars

* fix: MemberEnterNotification

* fix: MemberEnterNotification

* fix: MsgData status

* feat: add ApplicationVersion

* feat: add ApplicationVersion

* feat: add ApplicationVersion

---------

Co-authored-by: withchao <withchao@users.noreply.github.com>

* feat: ApplicationVersion move chat (#2813)

* fix: GroupApplicationAcceptedNotification

* fix: GroupApplicationAcceptedNotification

* fix: NotificationUserInfoUpdate

* cicd: robot automated Change

* fix: component

* fix: getConversationInfo

* feat: cron task

* feat: cron task

* feat: cron task

* feat: cron task

* feat: cron task

* fix: minio config url recognition error

* update gomake version

* update gomake version

* fix: seq conversion bug

* fix: redis pipe exec

* fix: ImportFriends

* fix: A large number of logs keysAndValues length is not even

* feat: mark read aggregate write

* feat: online status supports redis cluster

* feat: online status supports redis cluster

* feat: online status supports redis cluster

* merge

* merge

* read seq is written to mongo

* read seq is written to mongo

* fix: invitation to join group notification

* fix: friend op_user_id

* feat: optimizing asynchronous context

* feat: optimizing memamq size

* feat: add GetSeqMessage

* feat: GroupApplicationAgreeMemberEnterNotification

* feat: GroupApplicationAgreeMemberEnterNotification

* feat: go.mod

* feat: go.mod

* feat: join group notification and get seq

* feat: join group notification and get seq

* feat: avoid pulling messages from sessions with a large number of max seq values of 0

* feat: API supports gzip

* go.mod

* fix: nil pointer error on close

* fix: listen error

* fix: listen error

* update go.mod

* feat: add log

* fix: token parse token value

* fix: GetMsgBySeqs boundary issues

* fix: sn_ not sort

* fix: sn_ not sort

* fix: sn_ not sort

* fix: jssdk add

* fix: jssdk support

* fix: jssdk support

* fix: jssdk support

* fix: the message I sent is not set to read seq in mongodb

* fix: cannot modify group member avatars

* fix: MemberEnterNotification

* fix: MemberEnterNotification

* fix: MsgData status

* feat: add ApplicationVersion

* feat: ApplicationVersion move chat

---------

Co-authored-by: withchao <withchao@users.noreply.github.com>

* fix: improve condition check. (#2815)

---------

Co-authored-by: Monet Lee <monet_lee@163.com>
Co-authored-by: icey-yu <119291641+icey-yu@users.noreply.github.com>
Co-authored-by: Kürşat Aktaş <kursat.ce@gmail.com>
Co-authored-by: chao <48119764+withchao@users.noreply.github.com>
Co-authored-by: withchao <withchao@users.noreply.github.com>

* feat: support text ping pong

* feat: support text ping pong

* feat: gob json encoder

* feat: gob json encoder

* feat: gob json encoder

* feat: gob json encoder

* feat: gob json encoder

* feat: gob json encoder

* fix: concurrent write to websocket connection

* fix: concurrent write to websocket connection

* fix: concurrent write to websocket connection

---------

Co-authored-by: withchao <withchao@users.noreply.github.com>
Co-authored-by: Monet Lee <monet_lee@163.com>
Co-authored-by: OpenIM-Gordon <46924906+FGadvancer@users.noreply.github.com>
Co-authored-by: icey-yu <119291641+icey-yu@users.noreply.github.com>
Co-authored-by: 蔡相跃 <caixiangyue007@gmail.com>
Co-authored-by: skiffer-git <72860476+skiffer-git@users.noreply.github.com>
Co-authored-by: Kürşat Aktaş <kursat.ce@gmail.com>
2024-11-21 09:35:39 +00:00
Wiky Lyu 1df49ee718 fix #2860 migrate jpns to jpush (#2861) 2024-11-19 01:59:21 +00:00
yoyoIU 8471a0ef3b fix(push): push content with jpush (#2844)
* fix(push): push content with jpush

* docs: fix push enable example value

* fix(push): jpush error response
2024-11-18 08:25:46 +00:00
Monet Lee e0284724b5 build: update mongo and kafka start logic. (#2858)
* build: update mongo and kafka start logic.

* build: update go version image in dockerfile.

* build: remove zookeeper image.

* add authSource comment.

* update tools version.

* add created success print.

* remove unused script.

* format.
2024-11-18 03:32:47 +00:00
chao 12790e141d feat: merge js sdk (#2856)
* fix: GroupApplicationAcceptedNotification

* fix: GroupApplicationAcceptedNotification

* fix: NotificationUserInfoUpdate

* cicd: robot automated Change

* fix: component

* fix: getConversationInfo

* feat: cron task

* feat: cron task

* feat: cron task

* feat: cron task

* feat: cron task

* fix: minio config url recognition error

* update gomake version

* update gomake version

* fix: seq conversion bug

* fix: redis pipe exec

* fix: ImportFriends

* fix: A large number of logs keysAndValues length is not even

* feat: mark read aggregate write

* feat: online status supports redis cluster

* feat: online status supports redis cluster

* feat: online status supports redis cluster

* merge

* merge

* read seq is written to mongo

* read seq is written to mongo

* fix: invitation to join group notification

* fix: friend op_user_id

* feat: optimizing asynchronous context

* feat: optimizing memamq size

* feat: add GetSeqMessage

* feat: implement no gob encoder.

* update unitTest content.

* Update hub_server.go

* feat: GroupApplicationAgreeMemberEnterNotification

* fix: encoder replace to json encoder.

* feat: GroupApplicationAgreeMemberEnterNotification

* feat: go.mod

* feat: go.mod

* feat: join group notification and get seq

* feat: join group notification and get seq

* feat: avoid pulling messages from sessions with a large number of max seq values of 0

* feat: API supports gzip

* go.mod

* fix: nil pointer error on close

* fix: listen error

* fix: listen error

* update go.mod

* feat: add log

* fix: token parse token value

* fix: GetMsgBySeqs boundary issues

* merge: merge main code into js branch. (#2648)

* feat: update group notification when set to null. (#2590)

* refactor: refactor workflows contents.

* add tool workflows.

* update field.

* fix: remove chat error.

* Fix err.

* fix error.

* remove cn comment.

* update workflows files.

* update infra config.

* move workflows.

* feat: update bot.

* fix: solve incorrect outdated msg get.

* update get docIDs logic.

* update

* update skip logic.

* fix

* update.

* fix: delay deleteObject func.

* remove unused content.

* feat: update group notification when set to null.

* update log standard.

* feat: add long time push msg in prometheus (#2584)

* feat: add long time push msg in prometheus

* fix: log print

* fix: go mod

* fix: log msg

* fix: log init

* feat: push msg

* feat: go mod, remove cgo package

* feat: remove error log

* feat: test dummy push

* feat: redis pool config

* feat: push to kafka log

* feat: supports getting messages based on session ID and seq (#2582)

* fix: GroupApplicationAcceptedNotification

* fix: GroupApplicationAcceptedNotification

* fix: NotificationUserInfoUpdate

* cicd: robot automated Change

* fix: component

* fix: getConversationInfo

* feat: cron task

* feat: cron task

* feat: cron task

* feat: cron task

* feat: cron task

* fix: minio config url recognition error

* update gomake version

* update gomake version

* fix: seq conversion bug

* fix: redis pipe exec

* fix: ImportFriends

* fix: A large number of logs keysAndValues length is not even

* feat: mark read aggregate write

* feat: online status supports redis cluster

* feat: online status supports redis cluster

* feat: online status supports redis cluster

* merge

* merge

* read seq is written to mongo

* read seq is written to mongo

* fix: invitation to join group notification

* fix: friend op_user_id

* feat: optimizing asynchronous context

* feat: optimizing memamq size

* feat: add GetSeqMessage

* feat: GroupApplicationAgreeMemberEnterNotification

* feat: GroupApplicationAgreeMemberEnterNotification

* feat: go.mod

* feat: go.mod

* feat: join group notification and get seq

---------

Co-authored-by: withchao <withchao@users.noreply.github.com>

* feat: implement request batch count limit. (#2591)

* refactor: refactor workflows contents.

* add tool workflows.

* update field.

* fix: remove chat error.

* Fix err.

* fix error.

* remove cn comment.

* update workflows files.

* update infra config.

* move workflows.

* feat: update bot.

* fix: solve incorrect outdated msg get.

* update get docIDs logic.

* update

* update skip logic.

* fix

* update.

* fix: delay deleteObject func.

* remove unused content.

* update log type.

* feat: implement request batch count limit.

* update

* update

* fix: getting messages based on session ID and seq (#2595)

* fix: GroupApplicationAcceptedNotification

* fix: GroupApplicationAcceptedNotification

* fix: NotificationUserInfoUpdate

* cicd: robot automated Change

* fix: component

* fix: getConversationInfo

* feat: cron task

* feat: cron task

* feat: cron task

* feat: cron task

* feat: cron task

* fix: minio config url recognition error

* update gomake version

* update gomake version

* fix: seq conversion bug

* fix: redis pipe exec

* fix: ImportFriends

* fix: A large number of logs keysAndValues length is not even

* feat: mark read aggregate write

* feat: online status supports redis cluster

* feat: online status supports redis cluster

* feat: online status supports redis cluster

* merge

* merge

* read seq is written to mongo

* read seq is written to mongo

* fix: invitation to join group notification

* fix: friend op_user_id

* feat: optimizing asynchronous context

* feat: optimizing memamq size

* feat: add GetSeqMessage

* feat: GroupApplicationAgreeMemberEnterNotification

* feat: GroupApplicationAgreeMemberEnterNotification

* feat: go.mod

* feat: go.mod

* feat: join group notification and get seq

* feat: join group notification and get seq

---------

Co-authored-by: withchao <withchao@users.noreply.github.com>

* feat: avoid pulling messages from sessions with a large number of max seq values of 0 (#2602)

* fix: GroupApplicationAcceptedNotification

* fix: GroupApplicationAcceptedNotification

* fix: NotificationUserInfoUpdate

* cicd: robot automated Change

* fix: component

* fix: getConversationInfo

* feat: cron task

* feat: cron task

* feat: cron task

* feat: cron task

* feat: cron task

* fix: minio config url recognition error

* update gomake version

* update gomake version

* fix: seq conversion bug

* fix: redis pipe exec

* fix: ImportFriends

* fix: A large number of logs keysAndValues length is not even

* feat: mark read aggregate write

* feat: online status supports redis cluster

* feat: online status supports redis cluster

* feat: online status supports redis cluster

* merge

* merge

* read seq is written to mongo

* read seq is written to mongo

* fix: invitation to join group notification

* fix: friend op_user_id

* feat: optimizing asynchronous context

* feat: optimizing memamq size

* feat: add GetSeqMessage

* feat: GroupApplicationAgreeMemberEnterNotification

* feat: GroupApplicationAgreeMemberEnterNotification

* feat: go.mod

* feat: go.mod

* feat: join group notification and get seq

* feat: join group notification and get seq

* feat: avoid pulling messages from sessions with a large number of max seq values of 0

---------

Co-authored-by: withchao <withchao@users.noreply.github.com>

* refactor: improve db structure in `storage/controller` (#2604)

* refactor: refactor workflows contents.

* add tool workflows.

* update field.

* fix: remove chat error.

* Fix err.

* fix error.

* remove cn comment.

* update workflows files.

* update infra config.

* move workflows.

* feat: update bot.

* fix: solve incorrect outdated msg get.

* update get docIDs logic.

* update

* update skip logic.

* fix

* update.

* fix: delay deleteObject func.

* remove unused content.

* update log type.

* feat: implement request batch count limit.

* update

* update

* refactor: improve db structure in `storage/controller`

* feat: implement offline push using kafka (#2600)

* refactor: refactor workflows contents.

* add tool workflows.

* update field.

* fix: remove chat error.

* Fix err.

* fix error.

* remove cn comment.

* update workflows files.

* update infra config.

* move workflows.

* feat: update bot.

* fix: solve incorrect outdated msg get.

* update get docIDs logic.

* update

* update skip logic.

* fix

* update.

* fix: delay deleteObject func.

* remove unused content.

* update log type.

* feat: implement request batch count limit.

* update

* update

* feat: implement offline push.

* feat: implement batch Push split

* update go mod

* feat: implement kafka producer and consumer.

* update format.

* add PushMQ log.

* feat: update Handler logic.

* update MQ logic.

* update

* update

* fix: update OfflinePushConsumerHandler.

* feat: API supports gzip (#2609)

* fix: GroupApplicationAcceptedNotification

* fix: GroupApplicationAcceptedNotification

* fix: NotificationUserInfoUpdate

* cicd: robot automated Change

* fix: component

* fix: getConversationInfo

* feat: cron task

* feat: cron task

* feat: cron task

* feat: cron task

* feat: cron task

* fix: minio config url recognition error

* update gomake version

* update gomake version

* fix: seq conversion bug

* fix: redis pipe exec

* fix: ImportFriends

* fix: A large number of logs keysAndValues length is not even

* feat: mark read aggregate write

* feat: online status supports redis cluster

* feat: online status supports redis cluster

* feat: online status supports redis cluster

* merge

* merge

* read seq is written to mongo

* read seq is written to mongo

* fix: invitation to join group notification

* fix: friend op_user_id

* feat: optimizing asynchronous context

* feat: optimizing memamq size

* feat: add GetSeqMessage

* feat: GroupApplicationAgreeMemberEnterNotification

* feat: GroupApplicationAgreeMemberEnterNotification

* feat: go.mod

* feat: go.mod

* feat: join group notification and get seq

* feat: join group notification and get seq

* feat: avoid pulling messages from sessions with a large number of max seq values of 0

* feat: API supports gzip

---------

Co-authored-by: withchao <withchao@users.noreply.github.com>

* Fix err (#2608)

* refactor: refactor workflows contents.

* add tool workflows.

* update field.

* fix: remove chat error.

* Fix err.

* fix error.

* remove cn comment.

* update workflows files.

* update infra config.

* move workflows.

* feat: update bot.

* fix: solve incorrect outdated msg get.

* update get docIDs logic.

* update

* update skip logic.

* fix

* update.

* fix: delay deleteObject func.

* remove unused content.

* update log type.

* feat: implement request batch count limit.

* update

* update

* feat: add rocksTimeout

* feat: wrap logs

* feat: add logs

* feat: listen config

* feat: enable listen TIME_WAIT port

* feat: add logs

* feat: cache batch

* chore: enable fullUserCache

* feat: push rpc num

* feat: push err

* feat: with operationID

* feat: sleep

* feat: change 1s

* feat: change log

* feat: implement Getbatch in rpcCache.

* feat: print getOnline cost

* feat: change log

* feat: change kafka and push config

* feat: del interface

* feat: fix err

* feat: change config

* feat: go mod

* feat: change config

* feat: change config

* feat: add sleep in push

* feat: warn logs

* feat: logs

* feat: logs

* feat: change port

* feat: start config

* feat: remove port reuse

* feat: prometheus config

* feat: prometheus config

* feat: prometheus config

* feat: add long time send msg to grafana

* feat: init

* feat: init

* feat: implement offline push.

* feat: batch get user online

* feat: implement batch Push split

* update go mod

* Revert "feat: change port"

This reverts commit 06d5e944

* feat: change port

* feat: change config

* feat: implement kafka producer and consumer.

* update format.

* add PushMQ log.

* feat: get all online users and init push

* feat: lock in online cache

* feat: config

* fix: init online status

* fix: add logs

* fix: userIDs

* fix: add logs

* feat: update Handler logic.

* update MQ logic.

* update

* update

* fix: method name

* fix: update OfflinePushConsumerHandler.

* fix: prommetrics

* fix: add logs

* fix: ctx

* fix: log

* fix: config

* feat: change port

* fix: atomic online cache status

---------

Co-authored-by: Monet Lee <monet_lee@163.com>

* feature: add GetConversationsHasReadAndMaxSeq interface to the WebSocket API. (#2611)

* fix: lru lock (#2613)

* fix: lru lock

* fix: lru lock

* fix: lru lock

* fix: nil pointer error on close (#2618)

* fix: GroupApplicationAcceptedNotification

* fix: GroupApplicationAcceptedNotification

* fix: NotificationUserInfoUpdate

* cicd: robot automated Change

* fix: component

* fix: getConversationInfo

* feat: cron task

* feat: cron task

* feat: cron task

* feat: cron task

* feat: cron task

* fix: minio config url recognition error

* update gomake version

* update gomake version

* fix: seq conversion bug

* fix: redis pipe exec

* fix: ImportFriends

* fix: A large number of logs keysAndValues length is not even

* feat: mark read aggregate write

* feat: online status supports redis cluster

* feat: online status supports redis cluster

* feat: online status supports redis cluster

* merge

* merge

* read seq is written to mongo

* read seq is written to mongo

* fix: invitation to join group notification

* fix: friend op_user_id

* feat: optimizing asynchronous context

* feat: optimizing memamq size

* feat: add GetSeqMessage

* feat: GroupApplicationAgreeMemberEnterNotification

* feat: GroupApplicationAgreeMemberEnterNotification

* feat: go.mod

* feat: go.mod

* feat: join group notification and get seq

* feat: join group notification and get seq

* feat: avoid pulling messages from sessions with a large number of max seq values of 0

* feat: API supports gzip

* go.mod

* fix: nil pointer error on close

---------

Co-authored-by: withchao <withchao@users.noreply.github.com>

* feat: create group can push notification (#2617)

* fix: blockage caused by listen error (#2620)

* fix: GroupApplicationAcceptedNotification

* fix: GroupApplicationAcceptedNotification

* fix: NotificationUserInfoUpdate

* cicd: robot automated Change

* fix: component

* fix: getConversationInfo

* feat: cron task

* feat: cron task

* feat: cron task

* feat: cron task

* feat: cron task

* fix: minio config url recognition error

* update gomake version

* update gomake version

* fix: seq conversion bug

* fix: redis pipe exec

* fix: ImportFriends

* fix: A large number of logs keysAndValues ​​length is not even

* feat: mark read aggregate write

* feat: online status supports redis cluster

* feat: online status supports redis cluster

* feat: online status supports redis cluster

* merge

* merge

* read seq is written to mongo

* read seq is written to mongo

* fix: invitation to join group notification

* fix: friend op_user_id

* feat: optimizing asynchronous context

* feat: optimizing memamq size

* feat: add GetSeqMessage

* feat: GroupApplicationAgreeMemberEnterNotification

* feat: GroupApplicationAgreeMemberEnterNotification

* feat: go.mod

* feat: go.mod

* feat: join group notification and get seq

* feat: join group notification and get seq

* feat: avoid pulling messages from sessions with a large number of max seq values of 0

* feat: API supports gzip

* go.mod

* fix: nil pointer error on close

* fix: listen error

---------

Co-authored-by: withchao <withchao@users.noreply.github.com>

* fix: go.mod (#2621)

* fix: GroupApplicationAcceptedNotification

* fix: GroupApplicationAcceptedNotification

* fix: NotificationUserInfoUpdate

* cicd: robot automated Change

* fix: component

* fix: getConversationInfo

* feat: cron task

* feat: cron task

* feat: cron task

* feat: cron task

* feat: cron task

* fix: minio config url recognition error

* update gomake version

* update gomake version

* fix: seq conversion bug

* fix: redis pipe exec

* fix: ImportFriends

* fix: A large number of logs keysAndValues length is not even

* feat: mark read aggregate write

* feat: online status supports redis cluster

* feat: online status supports redis cluster

* feat: online status supports redis cluster

* merge

* merge

* read seq is written to mongo

* read seq is written to mongo

* fix: invitation to join group notification

* fix: friend op_user_id

* feat: optimizing asynchronous context

* feat: optimizing memamq size

* feat: add GetSeqMessage

* feat: GroupApplicationAgreeMemberEnterNotification

* feat: GroupApplicationAgreeMemberEnterNotification

* feat: go.mod

* feat: go.mod

* feat: join group notification and get seq

* feat: join group notification and get seq

* feat: avoid pulling messages from sessions with a large number of max seq values of 0

* feat: API supports gzip

* go.mod

* fix: nil pointer error on close

* fix: listen error

* fix: listen error

* update go.mod

---------

Co-authored-by: withchao <withchao@users.noreply.github.com>

* feat: improve searchMsg implement. (#2614)

* refactor: refactor workflows contents.

* add tool workflows.

* update field.

* fix: remove chat error.

* Fix err.

* fix error.

* remove cn comment.

* update workflows files.

* update infra config.

* move workflows.

* feat: update bot.

* fix: solve incorrect outdated msg get.

* update get docIDs logic.

* update

* update skip logic.

* fix

* update.

* fix: delay deleteObject func.

* remove unused content.

* update log type.

* feat: implement request batch count limit.

* update

* update

* remove unused script.

* feat: improve searchMsg implement.

* update mongo config.

* Fix lock (#2622)

* fix:log

* fix: lock

* fix: update setGroupInfoEX field name. (#2625)

* refactor: refactor workflows contents.

* add tool workflows.

* update field.

* fix: remove chat error.

* Fix err.

* fix error.

* remove cn comment.

* update workflows files.

* update infra config.

* move workflows.

* feat: update bot.

* fix: solve incorrect outdated msg get.

* update get docIDs logic.

* update

* update skip logic.

* fix

* update.

* fix: delay deleteObject func.

* remove unused content.

* update log type.

* feat: implement request batch count limit.

* update

* update

* fix: update setGroupInfoEX field name.

* fix: update setGroupInfoEX field name (#2626)

* refactor: refactor workflows contents.

* add tool workflows.

* update field.

* fix: remove chat error.

* Fix err.

* fix error.

* remove cn comment.

* update workflows files.

* update infra config.

* move workflows.

* feat: update bot.

* fix: solve incorrect outdated msg get.

* update get docIDs logic.

* update

* update skip logic.

* fix

* update.

* fix: delay deleteObject func.

* remove unused content.

* update log type.

* feat: implement request batch count limit.

* update

* update

* fix: update setGroupInfoEX field name.

* fix: update setGroupInfoEX field name

* feat: msg gateway add log (#2631)

* fix: GroupApplicationAcceptedNotification

* fix: GroupApplicationAcceptedNotification

* fix: NotificationUserInfoUpdate

* cicd: robot automated Change

* fix: component

* fix: getConversationInfo

* feat: cron task

* feat: cron task

* feat: cron task

* feat: cron task

* feat: cron task

* fix: minio config url recognition error

* update gomake version

* update gomake version

* fix: seq conversion bug

* fix: redis pipe exec

* fix: ImportFriends

* fix: A large number of logs keysAndValues length is not even

* feat: mark read aggregate write

* feat: online status supports redis cluster

* feat: online status supports redis cluster

* feat: online status supports redis cluster

* merge

* merge

* read seq is written to mongo

* read seq is written to mongo

* fix: invitation to join group notification

* fix: friend op_user_id

* feat: optimizing asynchronous context

* feat: optimizing memamq size

* feat: add GetSeqMessage

* feat: GroupApplicationAgreeMemberEnterNotification

* feat: GroupApplicationAgreeMemberEnterNotification

* feat: go.mod

* feat: go.mod

* feat: join group notification and get seq

* feat: join group notification and get seq

* feat: avoid pulling messages from sessions with a large number of max seq values of 0

* feat: API supports gzip

* go.mod

* fix: nil pointer error on close

* fix: listen error

* fix: listen error

* update go.mod

* feat: add log

---------

Co-authored-by: withchao <withchao@users.noreply.github.com>

* fix: update setGroupInfoEx func name and field. (#2634)

* refactor: refactor workflows contents.

* add tool workflows.

* update field.

* fix: remove chat error.

* Fix err.

* fix error.

* remove cn comment.

* update workflows files.

* update infra config.

* move workflows.

* feat: update bot.

* fix: solve incorrect outdated msg get.

* update get docIDs logic.

* update

* update skip logic.

* fix

* update.

* fix: delay deleteObject func.

* remove unused content.

* update log type.

* feat: implement request batch count limit.

* update

* update

* fix: update setGroupInfoEx func name and field.

* refactor: update groupinfoEx field.

* refactor: update database name in mongodb.yml

* add groupName Condition

* fix: setConversations req fill. (#2645)

* refactor: refactor workflows contents.

* add tool workflows.

* update field.

* fix: remove chat error.

* Fix err.

* fix error.

* remove cn comment.

* update workflows files.

* update infra config.

* move workflows.

* feat: update bot.

* fix: solve incorrect outdated msg get.

* update get docIDs logic.

* update

* update skip logic.

* fix

* update.

* fix: delay deleteObject func.

* remove unused content.

* update log type.

* feat: implement request batch count limit.

* update

* update

* fix: setConversations req fill.

* fix: GetMsgBySeqs boundary issues (#2647)

* fix: GroupApplicationAcceptedNotification

* fix: GroupApplicationAcceptedNotification

* fix: NotificationUserInfoUpdate

* cicd: robot automated Change

* fix: component

* fix: getConversationInfo

* feat: cron task

* feat: cron task

* feat: cron task

* feat: cron task

* feat: cron task

* fix: minio config url recognition error

* update gomake version

* update gomake version

* fix: seq conversion bug

* fix: redis pipe exec

* fix: ImportFriends

* fix: A large number of logs keysAndValues length is not even

* feat: mark read aggregate write

* feat: online status supports redis cluster

* feat: online status supports redis cluster

* feat: online status supports redis cluster

* merge

* merge

* read seq is written to mongo

* read seq is written to mongo

* fix: invitation to join group notification

* fix: friend op_user_id

* feat: optimizing asynchronous context

* feat: optimizing memamq size

* feat: add GetSeqMessage

* feat: GroupApplicationAgreeMemberEnterNotification

* feat: GroupApplicationAgreeMemberEnterNotification

* feat: go.mod

* feat: go.mod

* feat: join group notification and get seq

* feat: join group notification and get seq

* feat: avoid pulling messages from sessions with a large number of max seq values of 0

* feat: API supports gzip

* go.mod

* fix: nil pointer error on close

* fix: listen error

* fix: listen error

* update go.mod

* feat: add log

* fix: token parse token value

* fix: GetMsgBySeqs boundary issues

---------

Co-authored-by: withchao <withchao@users.noreply.github.com>

* fix: the attribute version is obsolete, remove it (#2644)

* refactor: update UserRegister request field. (#2650)

---------

Co-authored-by: Monet Lee <monet_lee@163.com>
Co-authored-by: icey-yu <119291641+icey-yu@users.noreply.github.com>
Co-authored-by: chao <48119764+withchao@users.noreply.github.com>
Co-authored-by: withchao <withchao@users.noreply.github.com>
Co-authored-by: 蔡相跃 <caixiangyue007@gmail.com>

* update go mod

* fix: sn_ not sort

* fix: sn_ not sort

* fix: sn_ not sort

* fix: jssdk add

* fix: jssdk support

* fix: jssdk support

* fix: jssdk support

* fix: the message I sent is not set to read seq in mongodb

* fix: cannot modify group member avatars

* merge: update code from main to v3.8-js-sdk-only. (#2720)

* fix: fix update groupName invalid. (#2673)

* refactor: change platform to platformID (#2670)

* feat: don`t return nil data (#2675)

Co-authored-by: Monet Lee <monet_lee@163.com>

* refactor: update fields type in userStatus and check registered. (#2676)

* fix: usertoken auth. (#2677)

* refactor: update fields type in userStatus and check registered.

* fix: usertoken auth.

* update contents.

* update content.

* update

* fix

* update pb file.

* feat: add friend agree after callback (#2680)

* fix: sn not sort (#2682)

* fix: GroupApplicationAcceptedNotification

* fix: GroupApplicationAcceptedNotification

* fix: NotificationUserInfoUpdate

* cicd: robot automated Change

* fix: component

* fix: getConversationInfo

* feat: cron task

* feat: cron task

* feat: cron task

* feat: cron task

* feat: cron task

* fix: minio config url recognition error

* update gomake version

* update gomake version

* fix: seq conversion bug

* fix: redis pipe exec

* fix: ImportFriends

* fix: A large number of logs keysAndValues length is not even

* feat: mark read aggregate write

* feat: online status supports redis cluster

* feat: online status supports redis cluster

* feat: online status supports redis cluster

* merge

* merge

* read seq is written to mongo

* read seq is written to mongo

* fix: invitation to join group notification

* fix: friend op_user_id

* feat: optimizing asynchronous context

* feat: optimizing memamq size

* feat: add GetSeqMessage

* feat: GroupApplicationAgreeMemberEnterNotification

* feat: GroupApplicationAgreeMemberEnterNotification

* feat: go.mod

* feat: go.mod

* feat: join group notification and get seq

* feat: join group notification and get seq

* feat: avoid pulling messages from sessions with a large number of max seq values of 0

* feat: API supports gzip

* go.mod

* fix: nil pointer error on close

* fix: listen error

* fix: listen error

* update go.mod

* feat: add log

* fix: token parse token value

* fix: GetMsgBySeqs boundary issues

* fix: sn_ not sort

* fix: sn_ not sort

* fix: sn_ not sort

---------

Co-authored-by: withchao <withchao@users.noreply.github.com>

* refactor: add GetAdminToken interface. (#2684)

* refactor: add GetAdminToken interface.

* update config.

* fix: admin token (#2686)

* fix: update workflows logic. (#2688)

* refactor: add GetAdminToken interface.

* update config.

* update workflows logic.

* fix: admin token (#2687)

* update the front image (#2692)

* update the front image

* update version

* feat: improve publish docker image workflows (#2697)

* refactor: add GetAdminToken interface.

* update config.

* update workflows logic.

* feat: improve publish docker image workflows

* update condition logic.

* fix: update load file logic. (#2700)

* refactor: add GetAdminToken interface.

* update config.

* update workflows logic.

* feat: improve publish docker image workflows

* update condition logic.

* fix: update load file logic.

* feat: Msg filter (#2703)

* feat: msg filter

* feat: msg filter

* feat: msg filter

* feat: provide the interface required by js sdk (#2712)

* fix: GroupApplicationAcceptedNotification

* fix: GroupApplicationAcceptedNotification

* fix: NotificationUserInfoUpdate

* cicd: robot automated Change

* fix: component

* fix: getConversationInfo

* feat: cron task

* feat: cron task

* feat: cron task

* feat: cron task

* feat: cron task

* fix: minio config url recognition error

* update gomake version

* update gomake version

* fix: seq conversion bug

* fix: redis pipe exec

* fix: ImportFriends

* fix: A large number of logs keysAndValues length is not even

* feat: mark read aggregate write

* feat: online status supports redis cluster

* feat: online status supports redis cluster

* feat: online status supports redis cluster

* merge

* merge

* read seq is written to mongo

* read seq is written to mongo

* fix: invitation to join group notification

* fix: friend op_user_id

* feat: optimizing asynchronous context

* feat: optimizing memamq size

* feat: add GetSeqMessage

* feat: GroupApplicationAgreeMemberEnterNotification

* feat: GroupApplicationAgreeMemberEnterNotification

* feat: go.mod

* feat: go.mod

* feat: join group notification and get seq

* feat: join group notification and get seq

* feat: avoid pulling messages from sessions with a large number of max seq values of 0

* feat: API supports gzip

* go.mod

* fix: nil pointer error on close

* fix: listen error

* fix: listen error

* update go.mod

* feat: add log

* fix: token parse token value

* fix: GetMsgBySeqs boundary issues

* fix: sn_ not sort

* fix: sn_ not sort

* fix: sn_ not sort

* fix: jssdk add

* fix: jssdk support

* fix: jssdk support

* fix: jssdk support

---------

Co-authored-by: withchao <withchao@users.noreply.github.com>

* Line webhook (#2716)

* feat: online and offline webhook

* feat: online and offline webhook

* feat: remove zk

* fix: the message I sent is not set to read seq in mongodb (#2718)

* fix: GroupApplicationAcceptedNotification

* fix: GroupApplicationAcceptedNotification

* fix: NotificationUserInfoUpdate

* cicd: robot automated Change

* fix: component

* fix: getConversationInfo

* feat: cron task

* feat: cron task

* feat: cron task

* feat: cron task

* feat: cron task

* fix: minio config url recognition error

* update gomake version

* update gomake version

* fix: seq conversion bug

* fix: redis pipe exec

* fix: ImportFriends

* fix: A large number of logs keysAndValues length is not even

* feat: mark read aggregate write

* feat: online status supports redis cluster

* feat: online status supports redis cluster

* feat: online status supports redis cluster

* merge

* merge

* read seq is written to mongo

* read seq is written to mongo

* fix: invitation to join group notification

* fix: friend op_user_id

* feat: optimizing asynchronous context

* feat: optimizing memamq size

* feat: add GetSeqMessage

* feat: GroupApplicationAgreeMemberEnterNotification

* feat: GroupApplicationAgreeMemberEnterNotification

* feat: go.mod

* feat: go.mod

* feat: join group notification and get seq

* feat: join group notification and get seq

* feat: avoid pulling messages from sessions with a large number of max seq values of 0

* feat: API supports gzip

* go.mod

* fix: nil pointer error on close

* fix: listen error

* fix: listen error

* update go.mod

* feat: add log

* fix: token parse token value

* fix: GetMsgBySeqs boundary issues

* fix: sn_ not sort

* fix: sn_ not sort

* fix: sn_ not sort

* fix: jssdk add

* fix: jssdk support

* fix: jssdk support

* fix: jssdk support

* fix: the message I sent is not set to read seq in mongodb

---------

Co-authored-by: withchao <withchao@users.noreply.github.com>

* fix: cannot modify group member avatars (#2719)

* fix: GroupApplicationAcceptedNotification

* fix: GroupApplicationAcceptedNotification

* fix: NotificationUserInfoUpdate

* cicd: robot automated Change

* fix: component

* fix: getConversationInfo

* feat: cron task

* feat: cron task

* feat: cron task

* feat: cron task

* feat: cron task

* fix: minio config url recognition error

* update gomake version

* update gomake version

* fix: seq conversion bug

* fix: redis pipe exec

* fix: ImportFriends

* fix: A large number of logs keysAndValues length is not even

* feat: mark read aggregate write

* feat: online status supports redis cluster

* feat: online status supports redis cluster

* feat: online status supports redis cluster

* merge

* merge

* read seq is written to mongo

* read seq is written to mongo

* fix: invitation to join group notification

* fix: friend op_user_id

* feat: optimizing asynchronous context

* feat: optimizing memamq size

* feat: add GetSeqMessage

* feat: GroupApplicationAgreeMemberEnterNotification

* feat: GroupApplicationAgreeMemberEnterNotification

* feat: go.mod

* feat: go.mod

* feat: join group notification and get seq

* feat: join group notification and get seq

* feat: avoid pulling messages from sessions with a large number of max seq values of 0

* feat: API supports gzip

* go.mod

* fix: nil pointer error on close

* fix: listen error

* fix: listen error

* update go.mod

* feat: add log

* fix: token parse token value

* fix: GetMsgBySeqs boundary issues

* fix: sn_ not sort

* fix: sn_ not sort

* fix: sn_ not sort

* fix: jssdk add

* fix: jssdk support

* fix: jssdk support

* fix: jssdk support

* fix: the message I sent is not set to read seq in mongodb

* fix: cannot modify group member avatars

---------

Co-authored-by: withchao <withchao@users.noreply.github.com>

---------

Co-authored-by: Monet Lee <monet_lee@163.com>
Co-authored-by: icey-yu <119291641+icey-yu@users.noreply.github.com>
Co-authored-by: chao <48119764+withchao@users.noreply.github.com>
Co-authored-by: withchao <withchao@users.noreply.github.com>
Co-authored-by: skiffer-git <72860476+skiffer-git@users.noreply.github.com>

* fix: MemberEnterNotification

* fix: MemberEnterNotification

* fix: MsgData status

* merge

* merge: update code from main to v3.8-js-sdk-only. (#2818)

* feat: implement merge milestone PR to target-branch. (#2796)

* build: improve workflows logic. (#2801)

* fix: improve time condition check method. (#2804)

* fix: improve time condition check method.

* fix

* fix: webhook before online push (#2805)

* fix: set own read seq in MongoDB when sender send a message. (#2808)

* fix: solve err Notification when setGroupInfo. (#2806)

* fix: solve err Notification when setGroupInfo.

* build: update checkout version.

* fix: update notification contents.

* Introducing OpenIM Guru on Gurubase.io (#2788)

* feat: support app update service (#2811)

* fix: GroupApplicationAcceptedNotification

* fix: GroupApplicationAcceptedNotification

* fix: NotificationUserInfoUpdate

* cicd: robot automated Change

* fix: component

* fix: getConversationInfo

* feat: cron task

* feat: cron task

* feat: cron task

* feat: cron task

* feat: cron task

* fix: minio config url recognition error

* update gomake version

* update gomake version

* fix: seq conversion bug

* fix: redis pipe exec

* fix: ImportFriends

* fix: A large number of logs keysAndValues length is not even

* feat: mark read aggregate write

* feat: online status supports redis cluster

* feat: online status supports redis cluster

* feat: online status supports redis cluster

* merge

* merge

* read seq is written to mongo

* read seq is written to mongo

* fix: invitation to join group notification

* fix: friend op_user_id

* feat: optimizing asynchronous context

* feat: optimizing memamq size

* feat: add GetSeqMessage

* feat: GroupApplicationAgreeMemberEnterNotification

* feat: GroupApplicationAgreeMemberEnterNotification

* feat: go.mod

* feat: go.mod

* feat: join group notification and get seq

* feat: join group notification and get seq

* feat: avoid pulling messages from sessions with a large number of max seq values of 0

* feat: API supports gzip

* go.mod

* fix: nil pointer error on close

* fix: listen error

* fix: listen error

* update go.mod

* feat: add log

* fix: token parse token value

* fix: GetMsgBySeqs boundary issues

* fix: sn_ not sort

* fix: sn_ not sort

* fix: sn_ not sort

* fix: jssdk add

* fix: jssdk support

* fix: jssdk support

* fix: jssdk support

* fix: the message I sent is not set to read seq in mongodb

* fix: cannot modify group member avatars

* fix: MemberEnterNotification

* fix: MemberEnterNotification

* fix: MsgData status

* feat: add ApplicationVersion

* feat: add ApplicationVersion

* feat: add ApplicationVersion

---------

Co-authored-by: withchao <withchao@users.noreply.github.com>

* feat: ApplicationVersion move chat (#2813)

* fix: GroupApplicationAcceptedNotification

* fix: GroupApplicationAcceptedNotification

* fix: NotificationUserInfoUpdate

* cicd: robot automated Change

* fix: component

* fix: getConversationInfo

* feat: cron task

* feat: cron task

* feat: cron task

* feat: cron task

* feat: cron task

* fix: minio config url recognition error

* update gomake version

* update gomake version

* fix: seq conversion bug

* fix: redis pipe exec

* fix: ImportFriends

* fix: A large number of logs keysAndValues length is not even

* feat: mark read aggregate write

* feat: online status supports redis cluster

* feat: online status supports redis cluster

* feat: online status supports redis cluster

* merge

* merge

* read seq is written to mongo

* read seq is written to mongo

* fix: invitation to join group notification

* fix: friend op_user_id

* feat: optimizing asynchronous context

* feat: optimizing memamq size

* feat: add GetSeqMessage

* feat: GroupApplicationAgreeMemberEnterNotification

* feat: GroupApplicationAgreeMemberEnterNotification

* feat: go.mod

* feat: go.mod

* feat: join group notification and get seq

* feat: join group notification and get seq

* feat: avoid pulling messages from sessions with a large number of max seq values of 0

* feat: API supports gzip

* go.mod

* fix: nil pointer error on close

* fix: listen error

* fix: listen error

* update go.mod

* feat: add log

* fix: token parse token value

* fix: GetMsgBySeqs boundary issues

* fix: sn_ not sort

* fix: sn_ not sort

* fix: sn_ not sort

* fix: jssdk add

* fix: jssdk support

* fix: jssdk support

* fix: jssdk support

* fix: the message I sent is not set to read seq in mongodb

* fix: cannot modify group member avatars

* fix: MemberEnterNotification

* fix: MemberEnterNotification

* fix: MsgData status

* feat: add ApplicationVersion

* feat: ApplicationVersion move chat

---------

Co-authored-by: withchao <withchao@users.noreply.github.com>

* fix: improve condition check. (#2815)

---------

Co-authored-by: Monet Lee <monet_lee@163.com>
Co-authored-by: icey-yu <119291641+icey-yu@users.noreply.github.com>
Co-authored-by: Kürşat Aktaş <kursat.ce@gmail.com>
Co-authored-by: chao <48119764+withchao@users.noreply.github.com>
Co-authored-by: withchao <withchao@users.noreply.github.com>

* feat: support text ping pong

* feat: support text ping pong

* feat: gob json encoder

* feat: gob json encoder

* feat: gob json encoder

* feat: gob json encoder

* feat: gob json encoder

* feat: gob json encoder

---------

Co-authored-by: withchao <withchao@users.noreply.github.com>
Co-authored-by: Monet Lee <monet_lee@163.com>
Co-authored-by: OpenIM-Gordon <46924906+FGadvancer@users.noreply.github.com>
Co-authored-by: icey-yu <119291641+icey-yu@users.noreply.github.com>
Co-authored-by: 蔡相跃 <caixiangyue007@gmail.com>
Co-authored-by: skiffer-git <72860476+skiffer-git@users.noreply.github.com>
Co-authored-by: Kürşat Aktaş <kursat.ce@gmail.com>
2024-11-14 06:35:18 +00:00
icey-yu 68698961f1 fix: SetConversations can update new conversation (#2838) 2024-11-07 10:11:43 +00:00
icey-yu ad8ae5f5f0 fix: get group return repeated result (#2842) 2024-11-07 06:59:31 +00:00
icey-yu 3ebd1bbaa4 fix: write msg to redis (#2836) 2024-11-05 07:33:15 +00:00
chao 8d308a4163 feat: support stream message (#2824)
* fix: GroupApplicationAcceptedNotification

* fix: GroupApplicationAcceptedNotification

* fix: NotificationUserInfoUpdate

* cicd: robot automated Change

* fix: component

* fix: getConversationInfo

* feat: cron task

* feat: cron task

* feat: cron task

* feat: cron task

* feat: cron task

* fix: minio config url recognition error

* update gomake version

* update gomake version

* fix: seq conversion bug

* fix: redis pipe exec

* fix: ImportFriends

* fix: A large number of logs keysAndValues length is not even

* feat: mark read aggregate write

* feat: online status supports redis cluster

* feat: online status supports redis cluster

* feat: online status supports redis cluster

* merge

* merge

* read seq is written to mongo

* read seq is written to mongo

* fix: invitation to join group notification

* fix: friend op_user_id

* feat: optimizing asynchronous context

* feat: optimizing memamq size

* feat: add GetSeqMessage

* feat: GroupApplicationAgreeMemberEnterNotification

* feat: GroupApplicationAgreeMemberEnterNotification

* feat: go.mod

* feat: go.mod

* feat: join group notification and get seq

* feat: join group notification and get seq

* feat: avoid pulling messages from sessions with a large number of max seq values of 0

* feat: API supports gzip

* go.mod

* fix: nil pointer error on close

* fix: listen error

* fix: listen error

* update go.mod

* feat: add log

* fix: token parse token value

* fix: GetMsgBySeqs boundary issues

* fix: sn_ not sort

* fix: sn_ not sort

* fix: sn_ not sort

* fix: jssdk add

* fix: jssdk support

* fix: jssdk support

* fix: jssdk support

* fix: the message I sent is not set to read seq in mongodb

* fix: cannot modify group member avatars

* fix: MemberEnterNotification

* fix: MemberEnterNotification

* fix: MsgData status

* feat: stream msg

* feat: support stream messages

---------

Co-authored-by: withchao <withchao@users.noreply.github.com>
2024-11-04 08:40:39 +00:00
Monet Lee 1516e9cb3a build: implement version file update when release. (#2826) 2024-11-04 06:50:07 +00:00
icey-yu 41d5688de6 Update login policy (#2822)
* fix: login Policy

* fix: del login Policy
2024-11-01 06:49:31 +00:00
Monet Lee 269dd7180f fix: improve condition check. (#2815) 2024-10-31 03:21:14 +00:00
chao 61fc9bffec feat: ApplicationVersion move chat (#2813)
* fix: GroupApplicationAcceptedNotification

* fix: GroupApplicationAcceptedNotification

* fix: NotificationUserInfoUpdate

* cicd: robot automated Change

* fix: component

* fix: getConversationInfo

* feat: cron task

* feat: cron task

* feat: cron task

* feat: cron task

* feat: cron task

* fix: minio config url recognition error

* update gomake version

* update gomake version

* fix: seq conversion bug

* fix: redis pipe exec

* fix: ImportFriends

* fix: A large number of logs keysAndValues length is not even

* feat: mark read aggregate write

* feat: online status supports redis cluster

* feat: online status supports redis cluster

* feat: online status supports redis cluster

* merge

* merge

* read seq is written to mongo

* read seq is written to mongo

* fix: invitation to join group notification

* fix: friend op_user_id

* feat: optimizing asynchronous context

* feat: optimizing memamq size

* feat: add GetSeqMessage

* feat: GroupApplicationAgreeMemberEnterNotification

* feat: GroupApplicationAgreeMemberEnterNotification

* feat: go.mod

* feat: go.mod

* feat: join group notification and get seq

* feat: join group notification and get seq

* feat: avoid pulling messages from sessions with a large number of max seq values of 0

* feat: API supports gzip

* go.mod

* fix: nil pointer error on close

* fix: listen error

* fix: listen error

* update go.mod

* feat: add log

* fix: token parse token value

* fix: GetMsgBySeqs boundary issues

* fix: sn_ not sort

* fix: sn_ not sort

* fix: sn_ not sort

* fix: jssdk add

* fix: jssdk support

* fix: jssdk support

* fix: jssdk support

* fix: the message I sent is not set to read seq in mongodb

* fix: cannot modify group member avatars

* fix: MemberEnterNotification

* fix: MemberEnterNotification

* fix: MsgData status

* feat: add ApplicationVersion

* feat: ApplicationVersion move chat

---------

Co-authored-by: withchao <withchao@users.noreply.github.com>
2024-10-30 09:12:33 +00:00
chao ea477fd91f feat: support app update service (#2811)
* fix: GroupApplicationAcceptedNotification

* fix: GroupApplicationAcceptedNotification

* fix: NotificationUserInfoUpdate

* cicd: robot automated Change

* fix: component

* fix: getConversationInfo

* feat: cron task

* feat: cron task

* feat: cron task

* feat: cron task

* feat: cron task

* fix: minio config url recognition error

* update gomake version

* update gomake version

* fix: seq conversion bug

* fix: redis pipe exec

* fix: ImportFriends

* fix: A large number of logs keysAndValues length is not even

* feat: mark read aggregate write

* feat: online status supports redis cluster

* feat: online status supports redis cluster

* feat: online status supports redis cluster

* merge

* merge

* read seq is written to mongo

* read seq is written to mongo

* fix: invitation to join group notification

* fix: friend op_user_id

* feat: optimizing asynchronous context

* feat: optimizing memamq size

* feat: add GetSeqMessage

* feat: GroupApplicationAgreeMemberEnterNotification

* feat: GroupApplicationAgreeMemberEnterNotification

* feat: go.mod

* feat: go.mod

* feat: join group notification and get seq

* feat: join group notification and get seq

* feat: avoid pulling messages from sessions with a large number of max seq values of 0

* feat: API supports gzip

* go.mod

* fix: nil pointer error on close

* fix: listen error

* fix: listen error

* update go.mod

* feat: add log

* fix: token parse token value

* fix: GetMsgBySeqs boundary issues

* fix: sn_ not sort

* fix: sn_ not sort

* fix: sn_ not sort

* fix: jssdk add

* fix: jssdk support

* fix: jssdk support

* fix: jssdk support

* fix: the message I sent is not set to read seq in mongodb

* fix: cannot modify group member avatars

* fix: MemberEnterNotification

* fix: MemberEnterNotification

* fix: MsgData status

* feat: add ApplicationVersion

* feat: add ApplicationVersion

* feat: add ApplicationVersion

---------

Co-authored-by: withchao <withchao@users.noreply.github.com>
2024-10-30 06:38:09 +00:00
Kürşat Aktaş b85e36e97c Introducing OpenIM Guru on Gurubase.io (#2788) 2024-10-30 03:39:17 +00:00
Monet Lee 5af41e83ce fix: solve err Notification when setGroupInfo. (#2806)
* fix: solve err Notification when setGroupInfo.

* build: update checkout version.

* fix: update notification contents.
2024-10-30 02:07:20 +00:00
OpenIM-Gordon 07d30a645b fix: set own read seq in MongoDB when sender send a message. (#2808) 2024-10-30 02:02:03 +00:00
icey-yu a4eae35118 fix: webhook before online push (#2805) 2024-10-29 04:29:25 +00:00
Monet Lee 53cf2c0540 fix: improve time condition check method. (#2804)
* fix: improve time condition check method.

* fix
2024-10-29 02:42:04 +00:00
Monet Lee 4de3befd60 build: improve workflows logic. (#2801) 2024-10-28 06:05:45 +00:00
Monet Lee 4a8abfa21d feat: implement merge milestone PR to target-branch. (#2796) 2024-10-25 10:12:29 +00:00
chao 649250b7c6 feat: support app update service (#2794)
* fix: GroupApplicationAcceptedNotification

* fix: GroupApplicationAcceptedNotification

* fix: NotificationUserInfoUpdate

* cicd: robot automated Change

* fix: component

* fix: getConversationInfo

* feat: cron task

* feat: cron task

* feat: cron task

* feat: cron task

* feat: cron task

* fix: minio config url recognition error

* update gomake version

* update gomake version

* fix: seq conversion bug

* fix: redis pipe exec

* fix: ImportFriends

* fix: A large number of logs keysAndValues length is not even

* feat: mark read aggregate write

* feat: online status supports redis cluster

* feat: online status supports redis cluster

* feat: online status supports redis cluster

* merge

* merge

* read seq is written to mongo

* read seq is written to mongo

* fix: invitation to join group notification

* fix: friend op_user_id

* feat: optimizing asynchronous context

* feat: optimizing memamq size

* feat: add GetSeqMessage

* feat: GroupApplicationAgreeMemberEnterNotification

* feat: GroupApplicationAgreeMemberEnterNotification

* feat: go.mod

* feat: go.mod

* feat: join group notification and get seq

* feat: join group notification and get seq

* feat: avoid pulling messages from sessions with a large number of max seq values of 0

* feat: API supports gzip

* go.mod

* fix: nil pointer error on close

* fix: listen error

* fix: listen error

* update go.mod

* feat: add log

* fix: token parse token value

* fix: GetMsgBySeqs boundary issues

* fix: sn_ not sort

* fix: sn_ not sort

* fix: sn_ not sort

* fix: jssdk add

* fix: jssdk support

* fix: jssdk support

* fix: jssdk support

* fix: the message I sent is not set to read seq in mongodb

* fix: cannot modify group member avatars

* fix: MemberEnterNotification

* fix: MemberEnterNotification

* fix: MsgData status

* feat: add ApplicationVersion

---------

Co-authored-by: withchao <withchao@users.noreply.github.com>
2024-10-25 08:50:58 +00:00
Monet Lee 312c8ba9d6 fix: improve getUserInfo logic. (#2792)
* fix: improve ConversationATInfo logic.

* fix logic err.

* fix: improve transfer Owner logic when newOwner is mute.

* fix: improve getUserInfo logic.
2024-10-25 08:23:21 +00:00
Monet Lee b36b69506f fix: improve transfer Owner logic when newOwner is mute. (#2790)
* fix: improve ConversationATInfo logic.

* fix logic err.

* fix: improve transfer Owner logic when newOwner is mute.
2024-10-25 06:25:56 +00:00
icey-yu 4be508a640 Revert: Change group member roleLevel can't send notification (#2789)
* fix: change group member info send notification

* fix: change group member info send notification

* fix: group

* fix: group

* fix: group
2024-10-25 03:58:09 +00:00
Monet Lee 0207f1dab1 fix: improve setConversationAtInfo logic. (#2782)
* fix: improve ConversationATInfo logic.

* fix logic err.
2024-10-25 02:34:17 +00:00
Alilestera bbac036d6c chore: remove unused .chglog and unnecessary content in goreleaser (#2786) 2024-10-25 02:31:43 +00:00
OpenIM-Gordon 9baf1ffe82 fix: del UserB's conversation version cache when userA set conversation's isPrivateChat to true. (#2785) 2024-10-24 12:59:56 +00:00
chao b5ef71f5c2 fix: client sends message status error to server (#2779)
* fix: GroupApplicationAcceptedNotification

* fix: GroupApplicationAcceptedNotification

* fix: NotificationUserInfoUpdate

* cicd: robot automated Change

* fix: component

* fix: getConversationInfo

* feat: cron task

* feat: cron task

* feat: cron task

* feat: cron task

* feat: cron task

* fix: minio config url recognition error

* update gomake version

* update gomake version

* fix: seq conversion bug

* fix: redis pipe exec

* fix: ImportFriends

* fix: A large number of logs keysAndValues length is not even

* feat: mark read aggregate write

* feat: online status supports redis cluster

* feat: online status supports redis cluster

* feat: online status supports redis cluster

* merge

* merge

* read seq is written to mongo

* read seq is written to mongo

* fix: invitation to join group notification

* fix: friend op_user_id

* feat: optimizing asynchronous context

* feat: optimizing memamq size

* feat: add GetSeqMessage

* feat: GroupApplicationAgreeMemberEnterNotification

* feat: GroupApplicationAgreeMemberEnterNotification

* feat: go.mod

* feat: go.mod

* feat: join group notification and get seq

* feat: join group notification and get seq

* feat: avoid pulling messages from sessions with a large number of max seq values of 0

* feat: API supports gzip

* go.mod

* fix: nil pointer error on close

* fix: listen error

* fix: listen error

* update go.mod

* feat: add log

* fix: token parse token value

* fix: GetMsgBySeqs boundary issues

* fix: sn_ not sort

* fix: sn_ not sort

* fix: sn_ not sort

* fix: jssdk add

* fix: jssdk support

* fix: jssdk support

* fix: jssdk support

* fix: the message I sent is not set to read seq in mongodb

* fix: cannot modify group member avatars

* fix: MemberEnterNotification

* fix: MemberEnterNotification

* fix: MsgData status

---------

Co-authored-by: withchao <withchao@users.noreply.github.com>
2024-10-24 09:33:31 +00:00
icey-yu 43919bc5fe fix: change group member info send notification (#2777) 2024-10-24 08:48:46 +00:00
icey-yu 0d03b46ac8 feat: change push config (#2775) 2024-10-24 08:47:06 +00:00
Alilestera a84f7bd217 fix: joinSource check args error. (#2773)
Co-authored-by: Monet Lee <monet_lee@163.com>
2024-10-24 07:21:51 +00:00
icey-yu 9e8a389698 feat: Add More Multi Login Policy (#2770)
* feat: multiLogin

* feat: change config
2024-10-24 07:02:44 +00:00
icey-yu a2110e416a fix: group level change logic (#2730) 2024-10-24 06:59:33 +00:00
chao 0b612c13c6 fix: join the group chat directly, notification type error (#2772)
* fix: GroupApplicationAcceptedNotification

* fix: GroupApplicationAcceptedNotification

* fix: NotificationUserInfoUpdate

* cicd: robot automated Change

* fix: component

* fix: getConversationInfo

* feat: cron task

* feat: cron task

* feat: cron task

* feat: cron task

* feat: cron task

* fix: minio config url recognition error

* update gomake version

* update gomake version

* fix: seq conversion bug

* fix: redis pipe exec

* fix: ImportFriends

* fix: A large number of logs keysAndValues length is not even

* feat: mark read aggregate write

* feat: online status supports redis cluster

* feat: online status supports redis cluster

* feat: online status supports redis cluster

* merge

* merge

* read seq is written to mongo

* read seq is written to mongo

* fix: invitation to join group notification

* fix: friend op_user_id

* feat: optimizing asynchronous context

* feat: optimizing memamq size

* feat: add GetSeqMessage

* feat: GroupApplicationAgreeMemberEnterNotification

* feat: GroupApplicationAgreeMemberEnterNotification

* feat: go.mod

* feat: go.mod

* feat: join group notification and get seq

* feat: join group notification and get seq

* feat: avoid pulling messages from sessions with a large number of max seq values of 0

* feat: API supports gzip

* go.mod

* fix: nil pointer error on close

* fix: listen error

* fix: listen error

* update go.mod

* feat: add log

* fix: token parse token value

* fix: GetMsgBySeqs boundary issues

* fix: sn_ not sort

* fix: sn_ not sort

* fix: sn_ not sort

* fix: jssdk add

* fix: jssdk support

* fix: jssdk support

* fix: jssdk support

* fix: the message I sent is not set to read seq in mongodb

* fix: cannot modify group member avatars

* fix: MemberEnterNotification

* fix: MemberEnterNotification

---------

Co-authored-by: withchao <withchao@users.noreply.github.com>
2024-10-23 09:59:37 +00:00
liangkai e7c7bf3bd1 fix: auth package import twice (#2724) 2024-10-15 06:47:34 +00:00
chao 3167f9943f fix: cannot modify group member avatars (#2719)
* fix: GroupApplicationAcceptedNotification

* fix: GroupApplicationAcceptedNotification

* fix: NotificationUserInfoUpdate

* cicd: robot automated Change

* fix: component

* fix: getConversationInfo

* feat: cron task

* feat: cron task

* feat: cron task

* feat: cron task

* feat: cron task

* fix: minio config url recognition error

* update gomake version

* update gomake version

* fix: seq conversion bug

* fix: redis pipe exec

* fix: ImportFriends

* fix: A large number of logs keysAndValues length is not even

* feat: mark read aggregate write

* feat: online status supports redis cluster

* feat: online status supports redis cluster

* feat: online status supports redis cluster

* merge

* merge

* read seq is written to mongo

* read seq is written to mongo

* fix: invitation to join group notification

* fix: friend op_user_id

* feat: optimizing asynchronous context

* feat: optimizing memamq size

* feat: add GetSeqMessage

* feat: GroupApplicationAgreeMemberEnterNotification

* feat: GroupApplicationAgreeMemberEnterNotification

* feat: go.mod

* feat: go.mod

* feat: join group notification and get seq

* feat: join group notification and get seq

* feat: avoid pulling messages from sessions with a large number of max seq values of 0

* feat: API supports gzip

* go.mod

* fix: nil pointer error on close

* fix: listen error

* fix: listen error

* update go.mod

* feat: add log

* fix: token parse token value

* fix: GetMsgBySeqs boundary issues

* fix: sn_ not sort

* fix: sn_ not sort

* fix: sn_ not sort

* fix: jssdk add

* fix: jssdk support

* fix: jssdk support

* fix: jssdk support

* fix: the message I sent is not set to read seq in mongodb

* fix: cannot modify group member avatars

---------

Co-authored-by: withchao <withchao@users.noreply.github.com>
2024-10-14 03:43:51 +00:00
chao 598750e8c7 fix: the message I sent is not set to read seq in mongodb (#2718)
* fix: GroupApplicationAcceptedNotification

* fix: GroupApplicationAcceptedNotification

* fix: NotificationUserInfoUpdate

* cicd: robot automated Change

* fix: component

* fix: getConversationInfo

* feat: cron task

* feat: cron task

* feat: cron task

* feat: cron task

* feat: cron task

* fix: minio config url recognition error

* update gomake version

* update gomake version

* fix: seq conversion bug

* fix: redis pipe exec

* fix: ImportFriends

* fix: A large number of logs keysAndValues length is not even

* feat: mark read aggregate write

* feat: online status supports redis cluster

* feat: online status supports redis cluster

* feat: online status supports redis cluster

* merge

* merge

* read seq is written to mongo

* read seq is written to mongo

* fix: invitation to join group notification

* fix: friend op_user_id

* feat: optimizing asynchronous context

* feat: optimizing memamq size

* feat: add GetSeqMessage

* feat: GroupApplicationAgreeMemberEnterNotification

* feat: GroupApplicationAgreeMemberEnterNotification

* feat: go.mod

* feat: go.mod

* feat: join group notification and get seq

* feat: join group notification and get seq

* feat: avoid pulling messages from sessions with a large number of max seq values of 0

* feat: API supports gzip

* go.mod

* fix: nil pointer error on close

* fix: listen error

* fix: listen error

* update go.mod

* feat: add log

* fix: token parse token value

* fix: GetMsgBySeqs boundary issues

* fix: sn_ not sort

* fix: sn_ not sort

* fix: sn_ not sort

* fix: jssdk add

* fix: jssdk support

* fix: jssdk support

* fix: jssdk support

* fix: the message I sent is not set to read seq in mongodb

---------

Co-authored-by: withchao <withchao@users.noreply.github.com>
2024-10-14 03:08:00 +00:00
icey-yu 87f79d3cee Line webhook (#2716)
* feat: online and offline webhook

* feat: online and offline webhook

* feat: remove zk
2024-10-14 03:02:56 +00:00
chao 7f44319b9b feat: provide the interface required by js sdk (#2712)
* fix: GroupApplicationAcceptedNotification

* fix: GroupApplicationAcceptedNotification

* fix: NotificationUserInfoUpdate

* cicd: robot automated Change

* fix: component

* fix: getConversationInfo

* feat: cron task

* feat: cron task

* feat: cron task

* feat: cron task

* feat: cron task

* fix: minio config url recognition error

* update gomake version

* update gomake version

* fix: seq conversion bug

* fix: redis pipe exec

* fix: ImportFriends

* fix: A large number of logs keysAndValues length is not even

* feat: mark read aggregate write

* feat: online status supports redis cluster

* feat: online status supports redis cluster

* feat: online status supports redis cluster

* merge

* merge

* read seq is written to mongo

* read seq is written to mongo

* fix: invitation to join group notification

* fix: friend op_user_id

* feat: optimizing asynchronous context

* feat: optimizing memamq size

* feat: add GetSeqMessage

* feat: GroupApplicationAgreeMemberEnterNotification

* feat: GroupApplicationAgreeMemberEnterNotification

* feat: go.mod

* feat: go.mod

* feat: join group notification and get seq

* feat: join group notification and get seq

* feat: avoid pulling messages from sessions with a large number of max seq values of 0

* feat: API supports gzip

* go.mod

* fix: nil pointer error on close

* fix: listen error

* fix: listen error

* update go.mod

* feat: add log

* fix: token parse token value

* fix: GetMsgBySeqs boundary issues

* fix: sn_ not sort

* fix: sn_ not sort

* fix: sn_ not sort

* fix: jssdk add

* fix: jssdk support

* fix: jssdk support

* fix: jssdk support

---------

Co-authored-by: withchao <withchao@users.noreply.github.com>
2024-10-14 01:31:44 +00:00
181 changed files with 4338 additions and 6375 deletions
+4 -12
@@ -6,20 +6,12 @@ ETCD_IMAGE=quay.io/coreos/etcd:v3.5.13
PROMETHEUS_IMAGE=prom/prometheus:v2.45.6
ALERTMANAGER_IMAGE=prom/alertmanager:v0.27.0
GRAFANA_IMAGE=grafana/grafana:11.0.1
NODE_EXPORTER_IMAGE=prom/node-exporter:v1.7.0
OPENIM_WEB_FRONT_IMAGE=openim/openim-web-front:release-v3.8.3
OPENIM_ADMIN_FRONT_IMAGE=openim/openim-admin-front:release-v1.8.4
OPENIM_WEB_FRONT_IMAGE=openim/openim-web-front:release-v3.8.1
OPENIM_ADMIN_FRONT_IMAGE=openim/openim-admin-front:release-v1.8.2
#FRONT_IMAGE: use aliyun images
#OPENIM_WEB_FRONT_IMAGE=registry.cn-hangzhou.aliyuncs.com/openimsdk/openim-web-front:release-v3.8.3
#OPENIM_ADMIN_FRONT_IMAGE=registry.cn-hangzhou.aliyuncs.com/openimsdk/openim-admin-front:release-v1.8.4
#OPENIM_WEB_FRONT_IMAGE=registry.cn-hangzhou.aliyuncs.com/openimsdk/openim-web-front:release-v3.8.1
#OPENIM_ADMIN_FRONT_IMAGE=registry.cn-hangzhou.aliyuncs.com/openimsdk/openim-admin-front:release-v1.8.2
DATA_DIR=./
MONGO_BACKUP_DIR=${DATA_DIR}components/backup/mongo/
PROMETHEUS_PORT=19091
ALERTMANAGER_PORT=19093
GRAFANA_PORT=13000
NODE_EXPORTER_PORT=19100
@@ -1,91 +0,0 @@
name: Build and release services Images
on:
  push:
    branches:
      - release-*
  release:
    types: [published]
  workflow_dispatch:
    inputs:
      tag:
        description: "Tag version to be used for Docker image"
        required: true
        default: "v3.8.3"
jobs:
  build-and-push:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout repository
        uses: actions/checkout@v3
      - name: Set up Docker Buildx
        uses: docker/setup-buildx-action@v2
      - name: Log in to Docker Hub
        uses: docker/login-action@v2
        with:
          username: ${{ secrets.DOCKER_USERNAME }}
          password: ${{ secrets.DOCKER_PASSWORD }}
      - name: Log in to GitHub Container Registry
        uses: docker/login-action@v2
        with:
          registry: ghcr.io
          username: ${{ github.repository_owner }}
          password: ${{ secrets.GITHUB_TOKEN }}
      - name: Log in to Aliyun Container Registry
        uses: docker/login-action@v2
        with:
          registry: registry.cn-hangzhou.aliyuncs.com
          username: ${{ secrets.ALIREGISTRY_USERNAME }}
          password: ${{ secrets.ALIREGISTRY_TOKEN }}
      - name: Extract metadata for Docker (tags, labels)
        id: meta
        uses: docker/metadata-action@v5
        with:
          tags: |
            type=ref,event=tag
            type=schedule
            type=ref,event=branch
            type=semver,pattern={{version}}
            type=semver,pattern=v{{version}}
            # type=semver,pattern={{major}}.{{minor}}
            type=semver,pattern=release-{{raw}}
            type=sha
            type=raw,value=${{ github.event.inputs.tag }}
      - name: Build and push Docker images
        run: |
          ROOT_DIR="build/images"
          for dir in "$ROOT_DIR"/*/; do
            # Find Dockerfile or *.dockerfile in a case-insensitive manner
            dockerfile=$(find "$dir" -maxdepth 1 -type f \( -iname 'dockerfile' -o -iname '*.dockerfile' \) | head -n 1)
            if [ -n "$dockerfile" ] && [ -f "$dockerfile" ]; then
              IMAGE_NAME=$(basename "$dir")
              echo "Building Docker image for $IMAGE_NAME with tags:"
              # Initialize tag arguments
              tag_args=()
              # Read each tag and append --tag arguments
              while IFS= read -r tag; do
                tag_args+=(--tag "${{ secrets.DOCKER_USERNAME }}/$IMAGE_NAME:$tag")
                tag_args+=(--tag "ghcr.io/${{ github.repository_owner }}/$IMAGE_NAME:$tag")
                tag_args+=(--tag "registry.cn-hangzhou.aliyuncs.com/openimsdk/$IMAGE_NAME:$tag")
              done <<< "${{ steps.meta.outputs.tags }}"
              # Build and push the Docker image with all tags
              docker buildx build --platform linux/amd64,linux/arm64 \
                --file "$dockerfile" \
                "${tag_args[@]}" \
                --push "$dir"
            else
              echo "No valid Dockerfile found in $dir"
            fi
          done
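For orientation: the workflow removed above looped over every directory under build/images, located its Dockerfile, and pushed one multi-arch image per directory to Docker Hub, GHCR, and Aliyun, tagged with every tag emitted by the metadata step. A minimal sketch of what a single loop iteration effectively ran, assuming a hypothetical image directory build/images/openim-api and a single tag v3.8.3 (both names are illustrative, not taken from the diff):

```yaml
# Hypothetical single-step equivalent of one loop iteration (sketch only).
- name: Build and push openim-api (illustration)
  run: |
    docker buildx build --platform linux/amd64,linux/arm64 \
      --file build/images/openim-api/Dockerfile \
      --tag "${{ secrets.DOCKER_USERNAME }}/openim-api:v3.8.3" \
      --tag "ghcr.io/${{ github.repository_owner }}/openim-api:v3.8.3" \
      --tag "registry.cn-hangzhou.aliyuncs.com/openimsdk/openim-api:v3.8.3" \
      --push build/images/openim-api
```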
+75
@@ -0,0 +1,75 @@
## [v3.8.2](https://github.com/openimsdk/open-im-server/releases/tag/v3.8.2) (2024-11-25)
### New Features
* feat: improve publish docker image workflows [#2697](https://github.com/openimsdk/open-im-server/pull/2697)
* feat: Msg filter [#2703](https://github.com/openimsdk/open-im-server/pull/2703)
* feat: provide the interface required by js sdk [#2712](https://github.com/openimsdk/open-im-server/pull/2712)
* feat: add webhooks of online status and remove zookeeper configuration. [#2716](https://github.com/openimsdk/open-im-server/pull/2716)
* feat: Add More Multi Login Policy [#2770](https://github.com/openimsdk/open-im-server/pull/2770)
* feat: Push configuration can ignore case sensitivity [#2775](https://github.com/openimsdk/open-im-server/pull/2775)
* feat: support app update service [#2794](https://github.com/openimsdk/open-im-server/pull/2794)
* feat: implement merge milestone PR to target-branch. [#2796](https://github.com/openimsdk/open-im-server/pull/2796)
* feat: support app update service [#2811](https://github.com/openimsdk/open-im-server/pull/2811)
* feat: ApplicationVersion move chat [#2813](https://github.com/openimsdk/open-im-server/pull/2813)
* feat: Update login policy [#2822](https://github.com/openimsdk/open-im-server/pull/2822)
* feat: merge js sdk [#2856](https://github.com/openimsdk/open-im-server/pull/2856)
* feat: Print Panic Log [#2850](https://github.com/openimsdk/open-im-server/pull/2850)
### Bug Fixes
* fix: update load file logic. [#2700](https://github.com/openimsdk/open-im-server/pull/2700)
* fix: the message I sent is not set to read seq in mongodb [#2718](https://github.com/openimsdk/open-im-server/pull/2718)
* fix: cannot modify group member avatars [#2719](https://github.com/openimsdk/open-im-server/pull/2719)
* fix: auth package import twice [#2724](https://github.com/openimsdk/open-im-server/pull/2724)
* fix: join the group chat directly, notification type error [#2772](https://github.com/openimsdk/open-im-server/pull/2772)
* fix: change update group member level logic [#2730](https://github.com/openimsdk/open-im-server/pull/2730)
* fix: joinSource check args error. [#2773](https://github.com/openimsdk/open-im-server/pull/2773)
* fix: Change group member roleLevel can't send notification [#2777](https://github.com/openimsdk/open-im-server/pull/2777)
* fix: client sends message status error to server [#2779](https://github.com/openimsdk/open-im-server/pull/2779)
* fix: del UserB's conversation version cache when userA set conversation's isPrivateChat to true. [#2785](https://github.com/openimsdk/open-im-server/pull/2785)
* fix: improve setConversationAtInfo logic. [#2782](https://github.com/openimsdk/open-im-server/pull/2782)
* fix: improve transfer Owner logic when newOwner is mute. [#2790](https://github.com/openimsdk/open-im-server/pull/2790)
* fix: improve getUserInfo logic. [#2792](https://github.com/openimsdk/open-im-server/pull/2792)
* fix: improve time condition check method. [#2804](https://github.com/openimsdk/open-im-server/pull/2804)
* fix: webhook before online push [#2805](https://github.com/openimsdk/open-im-server/pull/2805)
* fix: set own read seq in MongoDB when sender send a message. [#2808](https://github.com/openimsdk/open-im-server/pull/2808)
* fix: solve err Notification when setGroupInfo. [#2806](https://github.com/openimsdk/open-im-server/pull/2806)
* fix: improve condition check. [#2815](https://github.com/openimsdk/open-im-server/pull/2815)
* fix: Write back message to Redis [#2836](https://github.com/openimsdk/open-im-server/pull/2836)
* fix: get group return repeated result [#2842](https://github.com/openimsdk/open-im-server/pull/2842)
* fix: SetConversations can update new conversation [#2838](https://github.com/openimsdk/open-im-server/pull/2838)
* fix(push): push content with jpush [#2844](https://github.com/openimsdk/open-im-server/pull/2844)
* fix #2860 migrate jpns to jpush [#2861](https://github.com/openimsdk/open-im-server/pull/2861)
* fix: concurrent write to websocket connection [#2866](https://github.com/openimsdk/open-im-server/pull/2866)
* fix: Remove admin token in redis [#2871](https://github.com/openimsdk/open-im-server/pull/2871)
* Fix Push2User webhookBeforeOfflinePush [#2862](https://github.com/openimsdk/open-im-server/pull/2862)
* fix: move workflow to correct path [#2837](https://github.com/openimsdk/open-im-server/pull/2837)
* fix: del login Policy [#2825](https://github.com/openimsdk/open-im-server/pull/2825)
* fix: Wrong Redis Error Check [#2876](https://github.com/openimsdk/open-im-server/pull/2876)
* fix: minor log typo [#2881](https://github.com/openimsdk/open-im-server/pull/2881)
* fix: webhookAfterSingleMsgRead will be called correctly [#2884](https://github.com/openimsdk/open-im-server/pull/2884)
* fix: webhookBeforeSendSingleMsg will call before black and friend check [#2885](https://github.com/openimsdk/open-im-server/pull/2885)
### Chores
* chore: remove unused content [#2786](https://github.com/openimsdk/open-im-server/pull/2786)
### Builds
* build: improve workflows logic. [#2801](https://github.com/openimsdk/open-im-server/pull/2801)
* build: implement version file update when release. [#2826](https://github.com/openimsdk/open-im-server/pull/2826)
* build: update mongo and kafka start logic. [#2858](https://github.com/openimsdk/open-im-server/pull/2858)
* build: create changelog tool and workflows. [#2869](https://github.com/openimsdk/open-im-server/pull/2869)
* build(deps): bump github.com/golang-jwt/jwt/v4 from 4.5.0 to 4.5.1 [#2851](https://github.com/openimsdk/open-im-server/pull/2851)
### Others
* Revert: Change group member roleLevel can't send notification [#2789](https://github.com/openimsdk/open-im-server/pull/2789)
* Introducing OpenIM Guru on Gurubase.io [#2788](https://github.com/openimsdk/open-im-server/pull/2788)
* revert: write msg to redis [#2883](https://github.com/openimsdk/open-im-server/pull/2883)
* @lkzz made their first contribution in [#2724](https://github.com/openimsdk/open-im-server/pull/2724)
* @alilestera made their first contribution in [#2773](https://github.com/openimsdk/open-im-server/pull/2773)
* @kursataktas made their first contribution in [#2788](https://github.com/openimsdk/open-im-server/pull/2788)
* @yoyo930021 made their first contribution in [#2844](https://github.com/openimsdk/open-im-server/pull/2844)
* @wikylyu made their first contribution in [#2861](https://github.com/openimsdk/open-im-server/pull/2861)
+4
@@ -0,0 +1,4 @@
# CHANGELOGs
- [CHANGELOG-3.8.md](./CHANGELOG-3.8.md)
+1 -1
@@ -8,7 +8,7 @@ ENV SERVER_DIR=/openim-server
WORKDIR $SERVER_DIR
# Set the Go proxy to improve dependency resolution speed
# ENV GOPROXY=https://goproxy.io,direct
ENV GOPROXY=https://goproxy.io,direct
# Copy all files from the current directory into the container
COPY . .
+1
@@ -16,6 +16,7 @@
[![Best Practices](https://img.shields.io/badge/Best%20Practices-purple?style=for-the-badge)](https://www.bestpractices.dev/projects/8045)
[![Good First Issues](https://img.shields.io/github/issues/openimsdk/open-im-server/good%20first%20issue?style=for-the-badge&logo=github)](https://github.com/openimsdk/open-im-server/issues?q=is%3Aissue+is%3Aopen+sort%3Aupdated-desc+label%3A%22good+first+issue%22)
[![Language](https://img.shields.io/badge/Language-Go-blue.svg?style=for-the-badge&logo=go&logoColor=white)](https://golang.org/)
[![Gurubase](https://img.shields.io/badge/Gurubase-Ask%20OpenIM%20Guru-006BFF?style=for-the-badge)](https://gurubase.io/g/openim)
<p align="center">
-3
@@ -10,10 +10,7 @@ api:
prometheus:
  # Whether to enable prometheus
  enable: true
  # autoSetPorts indicates whether to automatically set the ports
  autoSetPorts: true
  # Prometheus listening ports, must match the number of api.ports
  # It will only take effect when autoSetPorts is set to false.
  ports: [ 12002 ]
  # This address can be accessed via a browser
  grafanaURL: http://127.0.0.1:13000/
+1 -2
@@ -1,4 +1,3 @@
cronExecuteTime: 0 2 * * *
retainChatRecords: 365
fileExpireTime: 180
deleteObjectType: ["msg-picture","msg-file", "msg-voice","msg-video","msg-video-snapshot","sdklog"]
fileExpireTime: 90
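For readers unfamiliar with the syntax: `cronExecuteTime` above is a standard five-field cron expression, sketched below (whether it is evaluated in server local time or UTC depends on the deployment, so treat the timezone as an assumption).

```bash
# Five-field cron layout used by cronExecuteTime:
#   minute  hour  day-of-month  month  day-of-week
#   0       2     *             *      *
# => run once per day at 02:00.
```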
+1 -5
@@ -1,17 +1,13 @@
rpc:
# The IP address where this RPC service registers itself; if left blank, it defaults to the internal network IP
registerIP:
# autoSetPorts indicates whether to automatically set the ports
autoSetPorts: true
registerIP:
# List of ports that the RPC service listens on; configuring multiple ports will launch multiple instances. These must match the number of configured prometheus ports
# It will only take effect when autoSetPorts is set to false.
ports: [ 10140, 10141, 10142, 10143, 10144, 10145, 10146, 10147, 10148, 10149, 10150, 10151, 10152, 10153, 10154, 10155 ]
prometheus:
# Enable or disable Prometheus monitoring
enable: true
# List of ports that Prometheus listens on; these must match the number of rpc.ports to ensure correct monitoring setup
# It will only take effect when autoSetPorts is set to false.
ports: [ 12140, 12141, 12142, 12143, 12144, 12145, 12146, 12147, 12148, 12149, 12150, 12151, 12152, 12153, 12154, 12155 ]
# IP address that the RPC/WebSocket service listens on; setting to 0.0.0.0 listens on both internal and external IPs. If left blank, it automatically uses the internal network IP
+1 -3
@@ -1,8 +1,6 @@
prometheus:
# Enable or disable Prometheus monitoring
enable: true
# autoSetPorts indicates whether to automatically set the ports
autoSetPorts: true
# List of ports that Prometheus listens on; each port corresponds to an instance of monitoring. Ensure these are managed accordingly
# It will only take effect when autoSetPorts is set to false.
# Because four instances have been launched, four ports need to be specified
ports: [ 12020, 12021, 12022, 12023, 12024, 12025, 12026, 12027, 12028, 12029, 12030, 12031, 12032, 12033, 12034, 12035 ]
+2 -6
@@ -3,22 +3,18 @@ rpc:
registerIP:
# IP address that the RPC service listens on; setting to 0.0.0.0 listens on both internal and external IPs. If left blank, it automatically uses the internal network IP
listenIP: 0.0.0.0
# autoSetPorts indicates whether to automatically set the ports
autoSetPorts: true
# List of ports that the RPC service listens on; configuring multiple ports will launch multiple instances. These must match the number of configured prometheus ports
# It will only take effect when autoSetPorts is set to false.
ports: [ 10170, 10171, 10172, 10173, 10174, 10175, 10176, 10177, 10178, 10179, 10180, 10181, 10182, 10183, 10184, 10185 ]
prometheus:
# Enable or disable Prometheus monitoring
enable: true
# List of ports that Prometheus listens on; these must match the number of rpc.ports to ensure correct monitoring setup
# It will only take effect when autoSetPorts is set to false.
ports: [ 12170, 12171, 12172, 12173, 12174, 12175, 12176, 12177, 12178, 12179, 12180, 12182, 12183, 12184, 12185, 12186 ]
maxConcurrentWorkers: 3
#Use geTui for offline push notifications, or choose fcm or jpush; corresponding configuration settings must be specified.
enable: geTui
#Use geTui for offline push notifications, or choose fcm or jpns; corresponding configuration settings must be specified.
enable:
geTui:
pushUrl: https://restapi.getui.com/v2/$appId
masterSecret:
-4
@@ -3,17 +3,13 @@ rpc:
registerIP:
# IP address that the RPC service listens on; setting to 0.0.0.0 listens on both internal and external IPs. If left blank, it automatically uses the internal network IP
listenIP: 0.0.0.0
# autoSetPorts indicates whether to automatically set the ports
autoSetPorts: true
# List of ports that the RPC service listens on; configuring multiple ports will launch multiple instances. These must match the number of configured prometheus ports
# It will only take effect when autoSetPorts is set to false.
ports: [ 10200 ]
prometheus:
# Enable or disable Prometheus monitoring
enable: true
# List of ports that Prometheus listens on; these must match the number of rpc.ports to ensure correct monitoring setup
# It will only take effect when autoSetPorts is set to false.
ports: [ 12200 ]
tokenPolicy:
-4
@@ -3,15 +3,11 @@ rpc:
registerIP:
# IP address that the RPC service listens on; setting to 0.0.0.0 listens on both internal and external IPs. If left blank, it automatically uses the internal network IP
listenIP: 0.0.0.0
# autoSetPorts indicates whether to automatically set the ports
autoSetPorts: true
# List of ports that the RPC service listens on; configuring multiple ports will launch multiple instances. These must match the number of configured prometheus ports
# It will only take effect when autoSetPorts is set to false.
ports: [ 10220 ]
prometheus:
# Enable or disable Prometheus monitoring
enable: true
# List of ports that Prometheus listens on; these must match the number of rpc.ports to ensure correct monitoring setup
# It will only take effect when autoSetPorts is set to false.
ports: [ 12220 ]
-4
@@ -3,15 +3,11 @@ rpc:
registerIP:
# IP address that the RPC service listens on; setting to 0.0.0.0 listens on both internal and external IPs. If left blank, it automatically uses the internal network IP
listenIP: 0.0.0.0
# autoSetPorts indicates whether to automatically set the ports
autoSetPorts: true
# List of ports that the RPC service listens on; configuring multiple ports will launch multiple instances. These must match the number of configured prometheus ports
# It will only take effect when autoSetPorts is set to false.
ports: [ 10240 ]
prometheus:
# Enable or disable Prometheus monitoring
enable: true
# List of ports that Prometheus listens on; these must match the number of rpc.ports to ensure correct monitoring setup
# It will only take effect when autoSetPorts is set to false.
ports: [ 12240 ]
-4
@@ -3,17 +3,13 @@ rpc:
registerIP:
# IP address that the RPC service listens on; setting to 0.0.0.0 listens on both internal and external IPs. If left blank, it automatically uses the internal network IP
listenIP: 0.0.0.0
# autoSetPorts indicates whether to automatically set the ports
autoSetPorts: true
# List of ports that the RPC service listens on; configuring multiple ports will launch multiple instances. These must match the number of configured prometheus ports
# It will only take effect when autoSetPorts is set to false.
ports: [ 10260 ]
prometheus:
# Enable or disable Prometheus monitoring
enable: true
# List of ports that Prometheus listens on; these must match the number of rpc.ports to ensure correct monitoring setup
# It will only take effect when autoSetPorts is set to false.
ports: [ 12260 ]
-4
@@ -3,17 +3,13 @@ rpc:
registerIP:
# IP address that the RPC service listens on; setting to 0.0.0.0 listens on both internal and external IPs. If left blank, it automatically uses the internal network IP
listenIP: 0.0.0.0
# autoSetPorts indicates whether to automatically set the ports
autoSetPorts: true
# List of ports that the RPC service listens on; configuring multiple ports will launch multiple instances. These must match the number of configured prometheus ports
# It will only take effect when autoSetPorts is set to false.
ports: [ 10280 ]
prometheus:
# Enable or disable Prometheus monitoring
enable: true
# List of ports that Prometheus listens on; these must match the number of rpc.ports to ensure correct monitoring setup
# It will only take effect when autoSetPorts is set to false.
ports: [ 12280 ]
-11
@@ -3,17 +3,13 @@ rpc:
registerIP:
# IP address that the RPC service listens on; setting to 0.0.0.0 listens on both internal and external IPs. If left blank, it automatically uses the internal network IP
listenIP: 0.0.0.0
# autoSetPorts indicates whether to automatically set the ports
autoSetPorts: true
# List of ports that the RPC service listens on; configuring multiple ports will launch multiple instances. These must match the number of configured prometheus ports
# It will only take effect when autoSetPorts is set to false.
ports: [ 10300 ]
prometheus:
# Enable or disable Prometheus monitoring
enable: true
# List of ports that Prometheus listens on; these must match the number of rpc.ports to ensure correct monitoring setup
# It will only take effect when autoSetPorts is set to false.
ports: [ 12300 ]
@@ -42,10 +38,3 @@ object:
accessKeySecret:
sessionToken:
publicRead: false
aws:
region: ap-southeast-2
bucket: testdemo832234
accessKeyID:
secretAccessKey:
sessionToken:
publicRead: false
+1 -5
@@ -3,15 +3,11 @@ rpc:
registerIP:
# Listening IP; 0.0.0.0 means both internal and external IPs are listened to, if blank, the internal network IP is automatically obtained by default
listenIP: 0.0.0.0
# autoSetPorts indicates whether to automatically set the ports
autoSetPorts: true
# List of ports that the RPC service listens on; configuring multiple ports will launch multiple instances. These must match the number of configured prometheus ports
# It will only take effect when autoSetPorts is set to false.
# Listening ports; if multiple are configured, multiple instances will be launched, and must be consistent with the number of prometheus.ports
ports: [ 10320 ]
prometheus:
# Whether to enable prometheus
enable: true
# Prometheus listening ports, must be consistent with the number of rpc.ports
# It will only take effect when autoSetPorts is set to false.
ports: [ 12320 ]
+49 -82
@@ -8,7 +8,7 @@ global:
alerting:
alertmanagers:
- static_configs:
- targets: [127.0.0.1:19093]
- targets: [internal_ip:19093]
# Load rules once and periodically evaluate them according to the global evaluation_interval.
rule_files:
@@ -25,95 +25,62 @@ scrape_configs:
# prometheus fetches application services
- job_name: node_exporter
static_configs:
- targets: [ 127.0.0.1:19100 ]
- targets: [ internal_ip:20500 ]
- job_name: openimserver-openim-api
http_sd_configs:
- url: "http://127.0.0.1:10002/prometheus_discovery/api"
# static_configs:
# - targets: [ 127.0.0.1:12002 ]
# labels:
# namespace: default
static_configs:
- targets: [ internal_ip:12002 ]
labels:
namespace: default
- job_name: openimserver-openim-msggateway
http_sd_configs:
- url: "http://127.0.0.1:10002/prometheus_discovery/msg_gateway"
# static_configs:
# - targets: [ 127.0.0.1:12140 ]
# # - targets: [ 127.0.0.1:12140, 127.0.0.1:12141, 127.0.0.1:12142, 127.0.0.1:12143, 127.0.0.1:12144, 127.0.0.1:12145, 127.0.0.1:12146, 127.0.0.1:12147, 127.0.0.1:12148, 127.0.0.1:12149, 127.0.0.1:12150, 127.0.0.1:12151, 127.0.0.1:12152, 127.0.0.1:12153, 127.0.0.1:12154, 127.0.0.1:12155 ]
# labels:
# namespace: default
static_configs:
- targets: [ internal_ip:12140 ]
# - targets: [ internal_ip:12140, internal_ip:12141, internal_ip:12142, internal_ip:12143, internal_ip:12144, internal_ip:12145, internal_ip:12146, internal_ip:12147, internal_ip:12148, internal_ip:12149, internal_ip:12150, internal_ip:12151, internal_ip:12152, internal_ip:12153, internal_ip:12154, internal_ip:12155 ]
labels:
namespace: default
- job_name: openimserver-openim-msgtransfer
http_sd_configs:
- url: "http://127.0.0.1:10002/prometheus_discovery/msg_transfer"
# static_configs:
# - targets: [ 127.0.0.1:12020, 127.0.0.1:12021, 127.0.0.1:12022, 127.0.0.1:12023, 127.0.0.1:12024, 127.0.0.1:12025, 127.0.0.1:12026, 127.0.0.1:12027 ]
# # - targets: [ 127.0.0.1:12020, 127.0.0.1:12021, 127.0.0.1:12022, 127.0.0.1:12023, 127.0.0.1:12024, 127.0.0.1:12025, 127.0.0.1:12026, 127.0.0.1:12027, 127.0.0.1:12028, 127.0.0.1:12029, 127.0.0.1:12030, 127.0.0.1:12031, 127.0.0.1:12032, 127.0.0.1:12033, 127.0.0.1:12034, 127.0.0.1:12035 ]
# labels:
# namespace: default
static_configs:
- targets: [ internal_ip:12020, internal_ip:12021, internal_ip:12022, internal_ip:12023, internal_ip:12024, internal_ip:12025, internal_ip:12026, internal_ip:12027 ]
# - targets: [ internal_ip:12020, internal_ip:12021, internal_ip:12022, internal_ip:12023, internal_ip:12024, internal_ip:12025, internal_ip:12026, internal_ip:12027, internal_ip:12028, internal_ip:12029, internal_ip:12030, internal_ip:12031, internal_ip:12032, internal_ip:12033, internal_ip:12034, internal_ip:12035 ]
labels:
namespace: default
- job_name: openimserver-openim-push
http_sd_configs:
- url: "http://127.0.0.1:10002/prometheus_discovery/push"
# static_configs:
# - targets: [ 127.0.0.1:12170, 127.0.0.1:12171, 127.0.0.1:12172, 127.0.0.1:12173, 127.0.0.1:12174, 127.0.0.1:12175, 127.0.0.1:12176, 127.0.0.1:12177 ]
## - targets: [ 127.0.0.1:12170, 127.0.0.1:12171, 127.0.0.1:12172, 127.0.0.1:12173, 127.0.0.1:12174, 127.0.0.1:12175, 127.0.0.1:12176, 127.0.0.1:12177, 127.0.0.1:12178, 127.0.0.1:12179, 127.0.0.1:12180, 127.0.0.1:12182, 127.0.0.1:12183, 127.0.0.1:12184, 127.0.0.1:12185, 127.0.0.1:12186 ]
# labels:
# namespace: default
static_configs:
- targets: [ internal_ip:12170, internal_ip:12171, internal_ip:12172, internal_ip:12173, internal_ip:12174, internal_ip:12175, internal_ip:12176, internal_ip:12177 ]
# - targets: [ internal_ip:12170, internal_ip:12171, internal_ip:12172, internal_ip:12173, internal_ip:12174, internal_ip:12175, internal_ip:12176, internal_ip:12177, internal_ip:12178, internal_ip:12179, internal_ip:12180, internal_ip:12182, internal_ip:12183, internal_ip:12184, internal_ip:12185, internal_ip:12186 ]
labels:
namespace: default
- job_name: openimserver-openim-rpc-auth
http_sd_configs:
- url: "http://127.0.0.1:10002/prometheus_discovery/auth"
# static_configs:
# - targets: [ 127.0.0.1:12200 ]
# labels:
# namespace: default
static_configs:
- targets: [ internal_ip:12200 ]
labels:
namespace: default
- job_name: openimserver-openim-rpc-conversation
http_sd_configs:
- url: "http://127.0.0.1:10002/prometheus_discovery/conversation"
# static_configs:
# - targets: [ 127.0.0.1:12220 ]
# labels:
# namespace: default
static_configs:
- targets: [ internal_ip:12220 ]
labels:
namespace: default
- job_name: openimserver-openim-rpc-friend
http_sd_configs:
- url: "http://127.0.0.1:10002/prometheus_discovery/friend"
# static_configs:
# - targets: [ 127.0.0.1:12240 ]
# labels:
# namespace: default
static_configs:
- targets: [ internal_ip:12240 ]
labels:
namespace: default
- job_name: openimserver-openim-rpc-group
http_sd_configs:
- url: "http://127.0.0.1:10002/prometheus_discovery/group"
# static_configs:
# - targets: [ 127.0.0.1:12260 ]
# labels:
# namespace: default.
static_configs:
- targets: [ internal_ip:12260 ]
labels:
namespace: default
- job_name: openimserver-openim-rpc-msg
http_sd_configs:
- url: "http://127.0.0.1:10002/prometheus_discovery/msg"
# static_configs:
# - targets: [ 127.0.0.1:12280 ]
# labels:
# namespace: default
static_configs:
- targets: [ internal_ip:12280 ]
labels:
namespace: default
- job_name: openimserver-openim-rpc-third
http_sd_configs:
- url: "http://127.0.0.1:10002/prometheus_discovery/third"
# static_configs:
# - targets: [ 127.0.0.1:12300 ]
# labels:
# namespace: default
static_configs:
- targets: [ internal_ip:12300 ]
labels:
namespace: default
- job_name: openimserver-openim-rpc-user
http_sd_configs:
- url: "http://127.0.0.1:10002/prometheus_discovery/user"
# static_configs:
# - targets: [ 127.0.0.1:12320 ]
# labels:
# namespace: default
static_configs:
- targets: [ internal_ip:12320 ]
labels:
namespace: default
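The net effect of this hunk: instead of hard-coding scrape targets (which cannot track ports assigned dynamically when `autoSetPorts: true`), Prometheus now asks the openim-api service for the current target list via HTTP service discovery. The discovery endpoints can be probed directly; the expected response shape is the standard Prometheus HTTP SD format, i.e. a JSON array of `{"targets": [...], "labels": {...}}` groups:

```bash
# Query two of the discovery endpoints registered in the scrape config above.
# Each should return a JSON array of target groups; empty output or an error
# usually means openim-api is not listening on port 10002.
curl -s http://127.0.0.1:10002/prometheus_discovery/api
curl -s http://127.0.0.1:10002/prometheus_discovery/user
```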
+138 -151
@@ -1,188 +1,175 @@
# Kubernetes Deployment

## Resource Requests

- CPU: 2 cores
- Memory: 4 GiB
- Disk usage: 20 GiB (on Node)

## Preconditions

Ensure that you have already deployed the following components:

- Redis
- MongoDB
- Kafka
- MinIO

## Origin Deploy

### Enter the target dir

`cd ./deployments/deploy/`

### Deploy configs and dependencies

Update your configMap `openim-config.yml`. **You can check the official docs for more details.**

In `openim-config.yml`, you need to modify the following configurations:

**discovery.yml**

- `kubernetes.namespace`: defaults to `default`; you can change it to your namespace.

**mongodb.yml**

- `address`: set to your existing MongoDB address, or the Mongo Service name and port in your deployment.
- `database`: set to your MongoDB database name. (The database must already exist.)
- `authSource`: set to your MongoDB authSource. (authSource specifies the database associated with the user's credentials; the user must be created in that database.)

**kafka.yml**

- `address`: set to your existing Kafka address, or the Kafka Service name and port in your deployment.

**redis.yml**

- `address`: set to your existing Redis address, or the Redis Service name and port in your deployment.

**minio.yml**

- `internalAddress`: set to the MinIO Service name and port in your deployment.
- `externalAddress`: set to the externally exposed MinIO address.

### Set the secret

A Secret is an object that contains a small amount of sensitive data, such as passwords and keys. Secrets are similar to ConfigMaps.

#### Redis:

Update the `redis-password` value in `redis-secret.yml` to your Redis password encoded in base64.

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: openim-redis-secret
type: Opaque
data:
  redis-password: b3BlbklNMTIz # update to your redis password encoded in base64; if it should be empty, set it to ""
```

#### Mongo:

Update the `mongo_openim_username` and `mongo_openim_password` values in `mongo-secret.yml` to your Mongo username and password encoded in base64.

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: openim-mongo-secret
type: Opaque
data:
  mongo_openim_username: b3BlbklN # update to your mongo username encoded in base64; these credentials must belong to a user created in the authSource database
  mongo_openim_password: b3BlbklNMTIz # update to your mongo password encoded in base64; if it should be empty, set it to ""
```

#### Minio:

Update the `minio-root-user` and `minio-root-password` values in `minio-secret.yml` to your MinIO accessKeyID and secretAccessKey encoded in base64.

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: openim-minio-secret
type: Opaque
data:
  minio-root-user: cm9vdA== # update to your minio accessKeyID encoded in base64; if it should be empty, set it to ""
  minio-root-password: b3BlbklNMTIz # update to your minio secretAccessKey encoded in base64; if it should be empty, set it to ""
```

#### Kafka:

Update the `kafka-password` value in `kafka-secret.yml` to your Kafka password encoded in base64.

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: openim-kafka-secret
type: Opaque
data:
  kafka-password: b3BlbklNMTIz # update to your kafka password encoded in base64; if it should be empty, set it to ""
```

### Apply the secret

```shell
kubectl apply -f redis-secret.yml -f minio-secret.yml -f mongo-secret.yml -f kafka-secret.yml
```

### Apply all config

`kubectl apply -f ./openim-config.yml`

> Attention: If you use the `default` namespace, you can execute `clusterRole.yml` to create a cluster role binding for the default service account.
>
> The namespace is set in `discovery.yml` inside `openim-config.yml`; change `kubernetes.namespace` to your namespace.

**Execute `clusterRole.yml`**

`kubectl apply -f ./clusterRole.yml`

### Run all deployments and services

> Note: Ensure that infrastructure services like MinIO, Redis, and Kafka are running before deploying the main applications.

```bash
kubectl apply \
  -f openim-api-deployment.yml \
  -f openim-api-service.yml \
  -f openim-crontask-deployment.yml \
  -f openim-rpc-user-deployment.yml \
  -f openim-rpc-user-service.yml \
  -f openim-msggateway-deployment.yml \
  -f openim-msggateway-service.yml \
  -f openim-push-deployment.yml \
  -f openim-push-service.yml \
  -f openim-msgtransfer-service.yml \
  -f openim-msgtransfer-deployment.yml \
  -f openim-rpc-conversation-deployment.yml \
  -f openim-rpc-conversation-service.yml \
  -f openim-rpc-auth-deployment.yml \
  -f openim-rpc-auth-service.yml \
  -f openim-rpc-group-deployment.yml \
  -f openim-rpc-group-service.yml \
  -f openim-rpc-friend-deployment.yml \
  -f openim-rpc-friend-service.yml \
  -f openim-rpc-msg-deployment.yml \
  -f openim-rpc-msg-service.yml \
  -f openim-rpc-third-deployment.yml \
  -f openim-rpc-third-service.yml
```

### Verification

After deploying the services, verify that everything is running smoothly:

```bash
# Check the status of all pods
kubectl get pods

# Check the status of services
kubectl get svc

# Check the status of deployments
kubectl get deployments

# View all resources
kubectl get all
```

### Clean all

`kubectl delete -f ./`

### Notes:

- If you use a specific namespace for your deployment, be sure to append the `-n <namespace>` flag to your kubectl commands.
-7
@@ -1,7 +0,0 @@
apiVersion: v1
kind: Secret
metadata:
name: openim-kafka-secret
type: Opaque
data:
kafka-password: ""
-8
@@ -1,8 +0,0 @@
apiVersion: v1
kind: Secret
metadata:
name: openim-minio-secret
type: Opaque
data:
minio-root-user: cm9vdA== # Base64 encoded "root"
minio-root-password: b3BlbklNMTIz # Base64 encoded "openIM123"
-79
@@ -1,79 +0,0 @@
apiVersion: apps/v1
kind: Deployment
metadata:
name: minio
labels:
app: minio
spec:
replicas: 2
selector:
matchLabels:
app: minio
template:
metadata:
labels:
app: minio
spec:
containers:
- name: minio
image: minio/minio:RELEASE.2024-01-11T07-46-16Z
ports:
- containerPort: 9000 # MinIO service port
- containerPort: 9090 # MinIO console port
volumeMounts:
- name: minio-data
mountPath: /data
- name: minio-config
mountPath: /root/.minio
env:
- name: TZ
value: "Asia/Shanghai"
- name: MINIO_ROOT_USER
valueFrom:
secretKeyRef:
name: openim-minio-secret
key: minio-root-user
- name: MINIO_ROOT_PASSWORD
valueFrom:
secretKeyRef:
name: openim-minio-secret
key: minio-root-password
command:
- "/bin/sh"
- "-c"
- |
mkdir -p /data && \
minio server /data --console-address ":9090"
volumes:
- name: minio-data
persistentVolumeClaim:
claimName: minio-pvc
- name: minio-config
persistentVolumeClaim:
claimName: minio-config-pvc
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: minio-pvc
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 10Gi
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: minio-config-pvc
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 2Gi
-8
@@ -1,8 +0,0 @@
apiVersion: v1
kind: Secret
metadata:
name: openim-mongo-secret
type: Opaque
data:
mongo_openim_username: b3BlbklN # base64 for "openIM", this user credentials need in authSource database.
mongo_openim_password: b3BlbklNMTIz # base64 for "openIM123"
-108
@@ -1,108 +0,0 @@
apiVersion: apps/v1
kind: StatefulSet
metadata:
name: mongo-statefulset
spec:
serviceName: "mongo"
replicas: 2
selector:
matchLabels:
app: mongo
template:
metadata:
labels:
app: mongo
spec:
containers:
- name: mongo
image: mongo:7.0
command: ["/bin/bash", "-c"]
args:
- >
docker-entrypoint.sh mongod --wiredTigerCacheSizeGB ${wiredTigerCacheSizeGB} --auth &
until mongosh -u ${MONGO_INITDB_ROOT_USERNAME} -p ${MONGO_INITDB_ROOT_PASSWORD} --authenticationDatabase admin --eval "db.runCommand({ ping: 1 })" &>/dev/null; do
echo "Waiting for MongoDB to start...";
sleep 1;
done &&
mongosh -u ${MONGO_INITDB_ROOT_USERNAME} -p ${MONGO_INITDB_ROOT_PASSWORD} --authenticationDatabase admin --eval "
db = db.getSiblingDB(\"${MONGO_INITDB_DATABASE}\");
if (!db.getUser(\"${MONGO_OPENIM_USERNAME}\")) {
db.createUser({
user: \"${MONGO_OPENIM_USERNAME}\",
pwd: \"${MONGO_OPENIM_PASSWORD}\",
roles: [{role: \"readWrite\", db: \"${MONGO_INITDB_DATABASE}\"}]
});
print(\"User created successfully: \");
print(\"Username: ${MONGO_OPENIM_USERNAME}\");
print(\"Password: ${MONGO_OPENIM_PASSWORD}\");
print(\"Database: ${MONGO_INITDB_DATABASE}\");
} else {
print(\"User already exists in database: ${MONGO_INITDB_DATABASE}, Username: ${MONGO_OPENIM_USERNAME}\");
}
" &&
tail -f /dev/null
ports:
- containerPort: 27017
env:
- name: MONGO_INITDB_ROOT_USERNAME
valueFrom:
secretKeyRef:
name: openim-mongo-init-secret
key: mongo_initdb_root_username
- name: MONGO_INITDB_ROOT_PASSWORD
valueFrom:
secretKeyRef:
name: openim-mongo-init-secret
key: mongo_initdb_root_password
- name: MONGO_INITDB_DATABASE
valueFrom:
secretKeyRef:
name: openim-mongo-init-secret
key: mongo_initdb_database
- name: MONGO_OPENIM_USERNAME
valueFrom:
secretKeyRef:
name: openim-mongo-init-secret
key: mongo_openim_username
- name: MONGO_OPENIM_PASSWORD
valueFrom:
secretKeyRef:
name: openim-mongo-init-secret
key: mongo_openim_password
- name: TZ
value: "Asia/Shanghai"
- name: wiredTigerCacheSizeGB
value: "1"
volumeMounts:
- name: mongo-storage
mountPath: /data/db
volumes:
- name: mongo-storage
persistentVolumeClaim:
claimName: mongo-pvc
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: mongo-pvc
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 5Gi
---
apiVersion: v1
kind: Secret
metadata:
name: openim-mongo-init-secret
type: Opaque
data:
mongo_initdb_root_username: cm9vdA== # base64 for "root"
mongo_initdb_root_password: b3BlbklNMTIz # base64 for "openIM123"
mongo_initdb_database: b3BlbmltX3Yz # base64 for "openim_v3"
mongo_openim_username: b3BlbklN # base64 for "openIM"
mongo_openim_password: b3BlbklNMTIz # base64 for "openIM123"
@@ -1,47 +0,0 @@
apiVersion: apps/v1
kind: Deployment
metadata:
name: openim-api
spec:
replicas: 2
selector:
matchLabels:
app: openim-api
template:
metadata:
labels:
app: openim-api
spec:
containers:
- name: openim-api-container
image: openim/openim-api:v3.8.3
env:
- name: CONFIG_PATH
value: "/config"
- name: IMENV_REDIS_PASSWORD
valueFrom:
secretKeyRef:
name: openim-redis-secret
key: redis-password
- name: IMENV_MONGODB_USERNAME
valueFrom:
secretKeyRef:
name: openim-mongo-secret
key: mongo_openim_username
- name: IMENV_MONGODB_PASSWORD
valueFrom:
secretKeyRef:
name: openim-mongo-secret
key: mongo_openim_password
volumeMounts:
- name: openim-config
mountPath: "/config"
readOnly: true
ports:
- containerPort: 10002
- containerPort: 12002
volumes:
- name: openim-config
configMap:
name: openim-config
File diff suppressed because it is too large.
@@ -1,36 +0,0 @@
apiVersion: apps/v1
kind: Deployment
metadata:
name: messagegateway-rpc-server
spec:
replicas: 2
selector:
matchLabels:
app: messagegateway-rpc-server
template:
metadata:
labels:
app: messagegateway-rpc-server
spec:
containers:
- name: openim-msggateway-container
image: openim/openim-msggateway:v3.8.3
env:
- name: CONFIG_PATH
value: "/config"
- name: IMENV_REDIS_PASSWORD
valueFrom:
secretKeyRef:
name: openim-redis-secret
key: redis-password
volumeMounts:
- name: openim-config
mountPath: "/config"
readOnly: true
ports:
- containerPort: 10140
- containerPort: 12001
volumes:
- name: openim-config
configMap:
name: openim-config
@@ -1,50 +0,0 @@
apiVersion: apps/v1
kind: Deployment
metadata:
name: openim-msgtransfer-server
spec:
replicas: 2
selector:
matchLabels:
app: openim-msgtransfer-server
template:
metadata:
labels:
app: openim-msgtransfer-server
spec:
containers:
- name: openim-msgtransfer-container
image: openim/openim-msgtransfer:v3.8.3
env:
- name: CONFIG_PATH
value: "/config"
- name: IMENV_REDIS_PASSWORD
valueFrom:
secretKeyRef:
name: openim-redis-secret
key: redis-password
- name: IMENV_MONGODB_USERNAME
valueFrom:
secretKeyRef:
name: openim-mongo-secret
key: mongo_openim_username
- name: IMENV_MONGODB_PASSWORD
valueFrom:
secretKeyRef:
name: openim-mongo-secret
key: mongo_openim_password
- name: IMENV_KAFKA_PASSWORD
valueFrom:
secretKeyRef:
name: openim-kafka-secret
key: kafka-password
volumeMounts:
- name: openim-config
mountPath: "/config"
readOnly: true
ports:
- containerPort: 12020
volumes:
- name: openim-config
configMap:
name: openim-config
@@ -1,41 +0,0 @@
apiVersion: apps/v1
kind: Deployment
metadata:
name: push-rpc-server
spec:
replicas: 2
selector:
matchLabels:
app: push-rpc-server
template:
metadata:
labels:
app: push-rpc-server
spec:
containers:
- name: push-rpc-server-container
image: openim/openim-push:v3.8.3
env:
- name: CONFIG_PATH
value: "/config"
- name: IMENV_REDIS_PASSWORD
valueFrom:
secretKeyRef:
name: openim-redis-secret
key: redis-password
- name: IMENV_KAFKA_PASSWORD
valueFrom:
secretKeyRef:
name: openim-kafka-secret
key: kafka-password
volumeMounts:
- name: openim-config
mountPath: "/config"
readOnly: true
ports:
- containerPort: 10170
- containerPort: 12170
volumes:
- name: openim-config
configMap:
name: openim-config
@@ -1,37 +0,0 @@
apiVersion: apps/v1
kind: Deployment
metadata:
name: auth-rpc-server
spec:
replicas: 2
selector:
matchLabels:
app: auth-rpc-server
template:
metadata:
labels:
app: auth-rpc-server
spec:
containers:
- name: auth-rpc-server-container
image: openim/openim-rpc-auth:v3.8.3
imagePullPolicy: Never
env:
- name: CONFIG_PATH
value: "/config"
- name: IMENV_REDIS_PASSWORD
valueFrom:
secretKeyRef:
name: openim-redis-secret
key: redis-password
volumeMounts:
- name: openim-config
mountPath: "/config"
readOnly: true
ports:
- containerPort: 10200
- containerPort: 12200
volumes:
- name: openim-config
configMap:
name: openim-config
@@ -1,46 +0,0 @@
apiVersion: apps/v1
kind: Deployment
metadata:
name: conversation-rpc-server
spec:
replicas: 2
selector:
matchLabels:
app: conversation-rpc-server
template:
metadata:
labels:
app: conversation-rpc-server
spec:
containers:
- name: conversation-rpc-server-container
image: openim/openim-rpc-conversation:v3.8.3
env:
- name: CONFIG_PATH
value: "/config"
- name: IMENV_REDIS_PASSWORD
valueFrom:
secretKeyRef:
name: openim-redis-secret
key: redis-password
- name: IMENV_MONGODB_USERNAME
valueFrom:
secretKeyRef:
name: openim-mongo-secret
key: mongo_openim_username
- name: IMENV_MONGODB_PASSWORD
valueFrom:
secretKeyRef:
name: openim-mongo-secret
key: mongo_openim_password
volumeMounts:
- name: openim-config
mountPath: "/config"
readOnly: true
ports:
- containerPort: 10220
- containerPort: 12220
volumes:
- name: openim-config
configMap:
name: openim-config
@@ -1,46 +0,0 @@
apiVersion: apps/v1
kind: Deployment
metadata:
name: friend-rpc-server
spec:
replicas: 2
selector:
matchLabels:
app: friend-rpc-server
template:
metadata:
labels:
app: friend-rpc-server
spec:
containers:
- name: friend-rpc-server-container
image: openim/openim-rpc-friend:v3.8.3
env:
- name: CONFIG_PATH
value: "/config"
- name: IMENV_REDIS_PASSWORD
valueFrom:
secretKeyRef:
name: openim-redis-secret
key: redis-password
- name: IMENV_MONGODB_USERNAME
valueFrom:
secretKeyRef:
name: openim-mongo-secret
key: mongo_openim_username
- name: IMENV_MONGODB_PASSWORD
valueFrom:
secretKeyRef:
name: openim-mongo-secret
key: mongo_openim_password
volumeMounts:
- name: openim-config
mountPath: "/config"
readOnly: true
ports:
- containerPort: 10240
- containerPort: 12240
volumes:
- name: openim-config
configMap:
name: openim-config
@@ -1,46 +0,0 @@
apiVersion: apps/v1
kind: Deployment
metadata:
name: group-rpc-server
spec:
replicas: 2
selector:
matchLabels:
app: group-rpc-server
template:
metadata:
labels:
app: group-rpc-server
spec:
containers:
- name: group-rpc-server-container
image: openim/openim-rpc-group:v3.8.3
env:
- name: CONFIG_PATH
value: "/config"
- name: IMENV_REDIS_PASSWORD
valueFrom:
secretKeyRef:
name: openim-redis-secret
key: redis-password
- name: IMENV_MONGODB_USERNAME
valueFrom:
secretKeyRef:
name: openim-mongo-secret
key: mongo_openim_username
- name: IMENV_MONGODB_PASSWORD
valueFrom:
secretKeyRef:
name: openim-mongo-secret
key: mongo_openim_password
volumeMounts:
- name: openim-config
mountPath: "/config"
readOnly: true
ports:
- containerPort: 10260
- containerPort: 12260
volumes:
- name: openim-config
configMap:
name: openim-config
@@ -1,51 +0,0 @@
apiVersion: apps/v1
kind: Deployment
metadata:
name: msg-rpc-server
spec:
replicas: 2
selector:
matchLabels:
app: msg-rpc-server
template:
metadata:
labels:
app: msg-rpc-server
spec:
containers:
- name: msg-rpc-server-container
image: openim/openim-rpc-msg:v3.8.3
env:
- name: CONFIG_PATH
value: "/config"
- name: IMENV_REDIS_PASSWORD
valueFrom:
secretKeyRef:
name: openim-redis-secret
key: redis-password
- name: IMENV_MONGODB_USERNAME
valueFrom:
secretKeyRef:
name: openim-mongo-secret
key: mongo_openim_username
- name: IMENV_MONGODB_PASSWORD
valueFrom:
secretKeyRef:
name: openim-mongo-secret
key: mongo_openim_password
- name: IMENV_KAFKA_PASSWORD
valueFrom:
secretKeyRef:
name: openim-kafka-secret
key: kafka-password
volumeMounts:
- name: openim-config
mountPath: "/config"
readOnly: true
ports:
- containerPort: 10280
- containerPort: 12280
volumes:
- name: openim-config
configMap:
name: openim-config
@@ -1,56 +0,0 @@
apiVersion: apps/v1
kind: Deployment
metadata:
name: third-rpc-server
spec:
replicas: 2
selector:
matchLabels:
app: third-rpc-server
template:
metadata:
labels:
app: third-rpc-server
spec:
containers:
- name: third-rpc-server-container
image: openim/openim-rpc-third:v3.8.3
env:
- name: CONFIG_PATH
value: "/config"
- name: IMENV_MINIO_ACCESSKEYID
valueFrom:
secretKeyRef:
name: openim-minio-secret
key: minio-root-user
- name: IMENV_MINIO_SECRETACCESSKEY
valueFrom:
secretKeyRef:
name: openim-minio-secret
key: minio-root-password
- name: IMENV_REDIS_PASSWORD
valueFrom:
secretKeyRef:
name: openim-redis-secret
key: redis-password
- name: IMENV_MONGODB_USERNAME
valueFrom:
secretKeyRef:
name: openim-mongo-secret
key: mongo_openim_username
- name: IMENV_MONGODB_PASSWORD
valueFrom:
secretKeyRef:
name: openim-mongo-secret
key: mongo_openim_password
volumeMounts:
- name: openim-config
mountPath: "/config"
readOnly: true
ports:
- containerPort: 10300
- containerPort: 12300
volumes:
- name: openim-config
configMap:
name: openim-config
@@ -1,51 +0,0 @@
apiVersion: apps/v1
kind: Deployment
metadata:
name: user-rpc-server
spec:
replicas: 2
selector:
matchLabels:
app: user-rpc-server
template:
metadata:
labels:
app: user-rpc-server
spec:
containers:
- name: user-rpc-server-container
image: openim/openim-rpc-user:v3.8.3
env:
- name: CONFIG_PATH
value: "/config"
- name: IMENV_REDIS_PASSWORD
valueFrom:
secretKeyRef:
name: openim-redis-secret
key: redis-password
- name: IMENV_MONGODB_USERNAME
valueFrom:
secretKeyRef:
name: openim-mongo-secret
key: mongo_openim_username
- name: IMENV_MONGODB_PASSWORD
valueFrom:
secretKeyRef:
name: openim-mongo-secret
key: mongo_openim_password
- name: IMENV_KAFKA_PASSWORD
valueFrom:
secretKeyRef:
name: openim-kafka-secret
key: kafka-password
volumeMounts:
- name: openim-config
mountPath: "/config"
readOnly: true
ports:
- containerPort: 10320
- containerPort: 12320
volumes:
- name: openim-config
configMap:
name: openim-config
-7
@@ -1,7 +0,0 @@
apiVersion: v1
kind: Secret
metadata:
name: openim-redis-secret
type: Opaque
data:
redis-password: b3BlbklNMTIz # "openIM123" in base64
-55
@@ -1,55 +0,0 @@
apiVersion: apps/v1
kind: StatefulSet
metadata:
name: redis-statefulset
spec:
serviceName: "redis"
replicas: 2
selector:
matchLabels:
app: redis
template:
metadata:
labels:
app: redis
spec:
containers:
- name: redis
image: redis:7.0.0
ports:
- containerPort: 6379
env:
- name: TZ
value: "Asia/Shanghai"
- name: REDIS_PASSWORD
valueFrom:
secretKeyRef:
name: redis-secret
key: redis-password
volumeMounts:
- name: redis-data
mountPath: /data
command:
[
"/bin/sh",
"-c",
'redis-server --requirepass "$REDIS_PASSWORD" --appendonly yes',
]
volumes:
- name: redis-config-volume
configMap:
name: openim-config
- name: redis-data
persistentVolumeClaim:
claimName: redis-pvc
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: redis-pvc
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 5Gi
+46 -66
@@ -37,7 +37,6 @@ services:
- "${DATA_DIR}/components/mongodb/data/db:/data/db"
- "${DATA_DIR}/components/mongodb/data/logs:/data/logs"
- "${DATA_DIR}/components/mongodb/data/conf:/etc/mongo"
- "${MONGO_BACKUP_DIR}:/data/backup"
environment:
- TZ=Asia/Shanghai
- wiredTigerCacheSizeGB=1
@@ -147,68 +146,49 @@ services:
networks:
- openim
prometheus:
image: ${PROMETHEUS_IMAGE}
container_name: prometheus
restart: always
user: root
profiles:
- m
volumes:
- ./config/prometheus.yml:/etc/prometheus/prometheus.yml
- ./config/instance-down-rules.yml:/etc/prometheus/instance-down-rules.yml
- ${DATA_DIR}/components/prometheus/data:/prometheus
command:
- '--config.file=/etc/prometheus/prometheus.yml'
- '--storage.tsdb.path=/prometheus'
- '--web.listen-address=:${PROMETHEUS_PORT}'
network_mode: host
alertmanager:
image: ${ALERTMANAGER_IMAGE}
container_name: alertmanager
restart: always
profiles:
- m
volumes:
- ./config/alertmanager.yml:/etc/alertmanager/alertmanager.yml
- ./config/email.tmpl:/etc/alertmanager/email.tmpl
command:
- '--config.file=/etc/alertmanager/alertmanager.yml'
- '--web.listen-address=:${ALERTMANAGER_PORT}'
network_mode: host
grafana:
image: ${GRAFANA_IMAGE}
container_name: grafana
user: root
restart: always
profiles:
- m
environment:
- GF_SECURITY_ALLOW_EMBEDDING=true
- GF_SESSION_COOKIE_SAMESITE=none
- GF_SESSION_COOKIE_SECURE=true
- GF_AUTH_ANONYMOUS_ENABLED=true
- GF_AUTH_ANONYMOUS_ORG_ROLE=Admin
- GF_SERVER_HTTP_PORT=${GRAFANA_PORT}
volumes:
- ${DATA_DIR:-./}/components/grafana:/var/lib/grafana
network_mode: host
node-exporter:
image: ${NODE_EXPORTER_IMAGE}
container_name: node-exporter
restart: always
profiles:
- m
volumes:
- /proc:/host/proc:ro
- /sys:/host/sys:ro
- /:/rootfs:ro
command:
- '--path.procfs=/host/proc'
- '--path.sysfs=/host/sys'
- '--path.rootfs=/rootfs'
- '--web.listen-address=:${NODE_EXPORTER_PORT}'
network_mode: host
# prometheus:
# image: ${PROMETHEUS_IMAGE}
# container_name: prometheus
# restart: always
# user: root
# volumes:
# - ./config/prometheus.yml:/etc/prometheus/prometheus.yml
# - ./config/instance-down-rules.yml:/etc/prometheus/instance-down-rules.yml
# - ${DATA_DIR}/components/prometheus/data:/prometheus
# command:
# - '--config.file=/etc/prometheus/prometheus.yml'
# - '--storage.tsdb.path=/prometheus'
# ports:
# - "19091:9090"
# networks:
# - openim
#
# alertmanager:
# image: ${ALERTMANAGER_IMAGE}
# container_name: alertmanager
# restart: always
# volumes:
# - ./config/alertmanager.yml:/etc/alertmanager/alertmanager.yml
# - ./config/email.tmpl:/etc/alertmanager/email.tmpl
# ports:
# - "19093:9093"
# networks:
# - openim
#
# grafana:
# image: ${GRAFANA_IMAGE}
# container_name: grafana
# user: root
# restart: always
# environment:
# - GF_SECURITY_ALLOW_EMBEDDING=true
# - GF_SESSION_COOKIE_SAMESITE=none
# - GF_SESSION_COOKIE_SECURE=true
# - GF_AUTH_ANONYMOUS_ENABLED=true
# - GF_AUTH_ANONYMOUS_ORG_ROLE=Admin
# ports:
# - "13000:3000"
# volumes:
# - ${DATA_DIR:-./}/components/grafana:/var/lib/grafana
# networks:
# - openim
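Note that the monitoring services above declare `profiles: - m`, so a plain `docker compose up` skips them. A minimal usage sketch, assuming this compose file and its `.env`:

```bash
# Start core services plus the monitoring stack (services under profile "m").
docker compose --profile m up -d

# Start core services only; prometheus, alertmanager, grafana and
# node-exporter are skipped because their profile is not enabled.
docker compose up -d
```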
+25 -23
@@ -1,6 +1,8 @@
module github.com/openimsdk/open-im-server/v3
go 1.22.7
go 1.22.0
toolchain go1.23.2
require (
firebase.google.com/go/v4 v4.14.1
@@ -12,15 +14,15 @@ require (
github.com/gorilla/websocket v1.5.1
github.com/grpc-ecosystem/go-grpc-prometheus v1.2.0
github.com/mitchellh/mapstructure v1.5.0
github.com/openimsdk/protocol v0.0.72-alpha.71
github.com/openimsdk/tools v0.0.50-alpha.72
github.com/openimsdk/protocol v0.0.72-alpha.55
github.com/openimsdk/tools v0.0.50-alpha.32
github.com/pkg/errors v0.9.1 // indirect
github.com/prometheus/client_golang v1.18.0
github.com/stretchr/testify v1.9.0
go.mongodb.org/mongo-driver v1.14.0
google.golang.org/api v0.170.0
google.golang.org/grpc v1.68.0
google.golang.org/protobuf v1.35.1
google.golang.org/grpc v1.66.2
google.golang.org/protobuf v1.34.2
gopkg.in/yaml.v3 v3.0.1
)
@@ -41,38 +43,38 @@ require (
github.com/shirou/gopsutil v3.21.11+incompatible
github.com/spf13/viper v1.18.2
github.com/stathat/consistent v1.0.0
go.etcd.io/etcd/client/v3 v3.5.13
go.uber.org/automaxprocs v1.5.3
golang.org/x/exp v0.0.0-20230905200255-921286631fa9
golang.org/x/sync v0.8.0
)
require (
cloud.google.com/go v0.112.1 // indirect
cloud.google.com/go/compute/metadata v0.5.0 // indirect
cloud.google.com/go/compute/metadata v0.3.0 // indirect
cloud.google.com/go/firestore v1.15.0 // indirect
cloud.google.com/go/iam v1.1.7 // indirect
cloud.google.com/go/longrunning v0.5.5 // indirect
cloud.google.com/go/storage v1.40.0 // indirect
github.com/MicahParks/keyfunc v1.9.0 // indirect
github.com/aliyun/aliyun-oss-go-sdk v3.0.2+incompatible // indirect
github.com/aws/aws-sdk-go-v2 v1.32.5 // indirect
github.com/aws/aws-sdk-go-v2 v1.23.1 // indirect
github.com/aws/aws-sdk-go-v2/aws/protocol/eventstream v1.5.1 // indirect
github.com/aws/aws-sdk-go-v2/config v1.28.5 // indirect
github.com/aws/aws-sdk-go-v2/credentials v1.17.46 // indirect
github.com/aws/aws-sdk-go-v2/feature/ec2/imds v1.16.20 // indirect
github.com/aws/aws-sdk-go-v2/internal/configsources v1.3.24 // indirect
github.com/aws/aws-sdk-go-v2/internal/endpoints/v2 v2.6.24 // indirect
github.com/aws/aws-sdk-go-v2/internal/ini v1.8.1 // indirect
github.com/aws/aws-sdk-go-v2/config v1.25.4 // indirect
github.com/aws/aws-sdk-go-v2/credentials v1.16.3 // indirect
github.com/aws/aws-sdk-go-v2/feature/ec2/imds v1.14.5 // indirect
github.com/aws/aws-sdk-go-v2/internal/configsources v1.2.4 // indirect
github.com/aws/aws-sdk-go-v2/internal/endpoints/v2 v2.5.4 // indirect
github.com/aws/aws-sdk-go-v2/internal/ini v1.7.1 // indirect
github.com/aws/aws-sdk-go-v2/internal/v4a v1.2.4 // indirect
github.com/aws/aws-sdk-go-v2/service/internal/accept-encoding v1.12.1 // indirect
github.com/aws/aws-sdk-go-v2/service/internal/accept-encoding v1.10.1 // indirect
github.com/aws/aws-sdk-go-v2/service/internal/checksum v1.2.4 // indirect
github.com/aws/aws-sdk-go-v2/service/internal/presigned-url v1.12.5 // indirect
github.com/aws/aws-sdk-go-v2/service/internal/presigned-url v1.10.4 // indirect
github.com/aws/aws-sdk-go-v2/service/internal/s3shared v1.16.4 // indirect
github.com/aws/aws-sdk-go-v2/service/s3 v1.43.1 // indirect
github.com/aws/aws-sdk-go-v2/service/sso v1.24.6 // indirect
github.com/aws/aws-sdk-go-v2/service/ssooidc v1.28.5 // indirect
github.com/aws/aws-sdk-go-v2/service/sts v1.33.1 // indirect
github.com/aws/smithy-go v1.22.1 // indirect
github.com/aws/aws-sdk-go-v2/service/sso v1.17.3 // indirect
github.com/aws/aws-sdk-go-v2/service/ssooidc v1.20.1 // indirect
github.com/aws/aws-sdk-go-v2/service/sts v1.25.4 // indirect
github.com/aws/smithy-go v1.17.0 // indirect
github.com/beorn7/perks v1.0.1 // indirect
github.com/bytedance/sonic v1.11.6 // indirect
github.com/bytedance/sonic/loader v0.1.1 // indirect
@@ -163,6 +165,7 @@ require (
github.com/yusufpapurcu/wmi v1.2.4 // indirect
go.etcd.io/etcd/api/v3 v3.5.13 // indirect
go.etcd.io/etcd/client/pkg/v3 v3.5.13 // indirect
go.etcd.io/etcd/client/v3 v3.5.13 // indirect
go.opencensus.io v0.24.0 // indirect
go.opentelemetry.io/contrib/instrumentation/google.golang.org/grpc/otelgrpc v0.49.0 // indirect
go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp v0.49.0 // indirect
@@ -172,16 +175,15 @@ require (
go.uber.org/atomic v1.9.0 // indirect
go.uber.org/multierr v1.11.0 // indirect
golang.org/x/arch v0.7.0 // indirect
golang.org/x/exp v0.0.0-20230905200255-921286631fa9 // indirect
golang.org/x/image v0.15.0 // indirect
golang.org/x/net v0.29.0 // indirect
golang.org/x/oauth2 v0.23.0 // indirect
golang.org/x/oauth2 v0.21.0 // indirect
golang.org/x/sys v0.25.0 // indirect
golang.org/x/text v0.18.0 // indirect
golang.org/x/time v0.5.0 // indirect
google.golang.org/appengine/v2 v2.0.2 // indirect
google.golang.org/genproto v0.0.0-20240213162025-012b6fc9bca9 // indirect
google.golang.org/genproto/googleapis/api v0.0.0-20240903143218-8af14fe29dc1 // indirect
google.golang.org/genproto/googleapis/api v0.0.0-20240604185151-ef581f913117 // indirect
google.golang.org/genproto/googleapis/rpc v0.0.0-20240903143218-8af14fe29dc1 // indirect
gorm.io/gorm v1.25.8 // indirect
stathat.com/c/consistent v1.0.0 // indirect
+40 -40
@@ -1,8 +1,8 @@
cloud.google.com/go v0.26.0/go.mod h1:aQUYkXzVsufM+DwF1aE+0xfcU+56JwCaLick0ClmMTw=
cloud.google.com/go v0.112.1 h1:uJSeirPke5UNZHIb4SxfZklVSiWWVqW4oXlETwZziwM=
cloud.google.com/go v0.112.1/go.mod h1:+Vbu+Y1UU+I1rjmzeMOb/8RfkKJK2Gyxi1X6jJCZLo4=
cloud.google.com/go/compute/metadata v0.5.0 h1:Zr0eK8JbFv6+Wi4ilXAR8FJ3wyNdpxHKJNPos6LTZOY=
cloud.google.com/go/compute/metadata v0.5.0/go.mod h1:aHnloV2TPI38yx4s9+wAZhHykWvVCfu7hQbF+9CWoiY=
cloud.google.com/go/compute/metadata v0.3.0 h1:Tz+eQXMEqDIKRsmY3cHTL6FVaynIjX2QxYC4trgAKZc=
cloud.google.com/go/compute/metadata v0.3.0/go.mod h1:zFmK7XCadkQkj6TtorcaGlCW1hT1fIilQDwofLpJ20k=
cloud.google.com/go/firestore v1.15.0 h1:/k8ppuWOtNuDHt2tsRV42yI21uaGnKDEQnRFeBpbFF8=
cloud.google.com/go/firestore v1.15.0/go.mod h1:GWOxFXcv8GZUtYpWHw/w6IuYNux/BtmeVTMmjrm4yhk=
cloud.google.com/go/iam v1.1.7 h1:z4VHOhwKLF/+UYXAJDFwGtNF0b6gjsW1Pk9Ml0U/IoM=
@@ -21,42 +21,42 @@ github.com/MicahParks/keyfunc v1.9.0/go.mod h1:IdnCilugA0O/99dW+/MkvlyrsX8+L8+x9
github.com/QcloudApi/qcloud_sign_golang v0.0.0-20141224014652-e4130a326409/go.mod h1:1pk82RBxDY/JZnPQrtqHlUFfCctgdorsd9M06fMynOM=
github.com/aliyun/aliyun-oss-go-sdk v3.0.2+incompatible h1:8psS8a+wKfiLt1iVDX79F7Y6wUM49Lcha2FMXt4UM8g=
github.com/aliyun/aliyun-oss-go-sdk v3.0.2+incompatible/go.mod h1:T/Aws4fEfogEE9v+HPhhw+CntffsBHJ8nXQCwKr0/g8=
github.com/aws/aws-sdk-go-v2 v1.32.5 h1:U8vdWJuY7ruAkzaOdD7guwJjD06YSKmnKCJs7s3IkIo=
github.com/aws/aws-sdk-go-v2 v1.32.5/go.mod h1:P5WJBrYqqbWVaOxgH0X/FYYD47/nooaPOZPlQdmiN2U=
github.com/aws/aws-sdk-go-v2 v1.23.1 h1:qXaFsOOMA+HsZtX8WoCa+gJnbyW7qyFFBlPqvTSzbaI=
github.com/aws/aws-sdk-go-v2 v1.23.1/go.mod h1:i1XDttT4rnf6vxc9AuskLc6s7XBee8rlLilKlc03uAA=
github.com/aws/aws-sdk-go-v2/aws/protocol/eventstream v1.5.1 h1:ZY3108YtBNq96jNZTICHxN1gSBSbnvIdYwwqnvCV4Mc=
github.com/aws/aws-sdk-go-v2/aws/protocol/eventstream v1.5.1/go.mod h1:t8PYl/6LzdAqsU4/9tz28V/kU+asFePvpOMkdul0gEQ=
github.com/aws/aws-sdk-go-v2/config v1.28.5 h1:Za41twdCXbuyyWv9LndXxZZv3QhTG1DinqlFsSuvtI0=
github.com/aws/aws-sdk-go-v2/config v1.28.5/go.mod h1:4VsPbHP8JdcdUDmbTVgNL/8w9SqOkM5jyY8ljIxLO3o=
github.com/aws/aws-sdk-go-v2/credentials v1.17.46 h1:AU7RcriIo2lXjUfHFnFKYsLCwgbz1E7Mm95ieIRDNUg=
github.com/aws/aws-sdk-go-v2/credentials v1.17.46/go.mod h1:1FmYyLGL08KQXQ6mcTlifyFXfJVCNJTVGuQP4m0d/UA=
github.com/aws/aws-sdk-go-v2/feature/ec2/imds v1.16.20 h1:sDSXIrlsFSFJtWKLQS4PUWRvrT580rrnuLydJrCQ/yA=
github.com/aws/aws-sdk-go-v2/feature/ec2/imds v1.16.20/go.mod h1:WZ/c+w0ofps+/OUqMwWgnfrgzZH1DZO1RIkktICsqnY=
github.com/aws/aws-sdk-go-v2/internal/configsources v1.3.24 h1:4usbeaes3yJnCFC7kfeyhkdkPtoRYPa/hTmCqMpKpLI=
github.com/aws/aws-sdk-go-v2/internal/configsources v1.3.24/go.mod h1:5CI1JemjVwde8m2WG3cz23qHKPOxbpkq0HaoreEgLIY=
github.com/aws/aws-sdk-go-v2/internal/endpoints/v2 v2.6.24 h1:N1zsICrQglfzaBnrfM0Ys00860C+QFwu6u/5+LomP+o=
github.com/aws/aws-sdk-go-v2/internal/endpoints/v2 v2.6.24/go.mod h1:dCn9HbJ8+K31i8IQ8EWmWj0EiIk0+vKiHNMxTTYveAg=
github.com/aws/aws-sdk-go-v2/internal/ini v1.8.1 h1:VaRN3TlFdd6KxX1x3ILT5ynH6HvKgqdiXoTxAF4HQcQ=
github.com/aws/aws-sdk-go-v2/internal/ini v1.8.1/go.mod h1:FbtygfRFze9usAadmnGJNc8KsP346kEe+y2/oyhGAGc=
github.com/aws/aws-sdk-go-v2/config v1.25.4 h1:r+X1x8QI6FEPdJDWCNBDZHyAcyFwSjHN8q8uuus+Axs=
github.com/aws/aws-sdk-go-v2/config v1.25.4/go.mod h1:8GTjImECskr7D88P/Nn9uM4M4rLY9i77hLJZgkZEWV8=
github.com/aws/aws-sdk-go-v2/credentials v1.16.3 h1:8PeI2krzzjDJ5etmgaMiD1JswsrLrWvKKu/uBUtNy1g=
github.com/aws/aws-sdk-go-v2/credentials v1.16.3/go.mod h1:Kdh/okh+//vQ/AjEt81CjvkTo64+/zIE4OewP7RpfXk=
github.com/aws/aws-sdk-go-v2/feature/ec2/imds v1.14.5 h1:KehRNiVzIfAcj6gw98zotVbb/K67taJE0fkfgM6vzqU=
github.com/aws/aws-sdk-go-v2/feature/ec2/imds v1.14.5/go.mod h1:VhnExhw6uXy9QzetvpXDolo1/hjhx4u9qukBGkuUwjs=
github.com/aws/aws-sdk-go-v2/internal/configsources v1.2.4 h1:LAm3Ycm9HJfbSCd5I+wqC2S9Ej7FPrgr5CQoOljJZcE=
github.com/aws/aws-sdk-go-v2/internal/configsources v1.2.4/go.mod h1:xEhvbJcyUf/31yfGSQBe01fukXwXJ0gxDp7rLfymWE0=
github.com/aws/aws-sdk-go-v2/internal/endpoints/v2 v2.5.4 h1:4GV0kKZzUxiWxSVpn/9gwR0g21NF1Jsyduzo9rHgC/Q=
github.com/aws/aws-sdk-go-v2/internal/endpoints/v2 v2.5.4/go.mod h1:dYvTNAggxDZy6y1AF7YDwXsPuHFy/VNEpEI/2dWK9IU=
github.com/aws/aws-sdk-go-v2/internal/ini v1.7.1 h1:uR9lXYjdPX0xY+NhvaJ4dD8rpSRz5VY81ccIIoNG+lw=
github.com/aws/aws-sdk-go-v2/internal/ini v1.7.1/go.mod h1:6fQQgfuGmw8Al/3M2IgIllycxV7ZW7WCdVSqfBeUiCY=
github.com/aws/aws-sdk-go-v2/internal/v4a v1.2.4 h1:40Q4X5ebZruRtknEZH/bg91sT5pR853F7/1X9QRbI54=
github.com/aws/aws-sdk-go-v2/internal/v4a v1.2.4/go.mod h1:u77N7eEECzUv7F0xl2gcfK/vzc8wcjWobpy+DcrLJ5E=
github.com/aws/aws-sdk-go-v2/service/internal/accept-encoding v1.12.1 h1:iXtILhvDxB6kPvEXgsDhGaZCSC6LQET5ZHSdJozeI0Y=
github.com/aws/aws-sdk-go-v2/service/internal/accept-encoding v1.12.1/go.mod h1:9nu0fVANtYiAePIBh2/pFUSwtJ402hLnp854CNoDOeE=
github.com/aws/aws-sdk-go-v2/service/internal/accept-encoding v1.10.1 h1:rpkF4n0CyFcrJUG/rNNohoTmhtWlFTRI4BsZOh9PvLs=
github.com/aws/aws-sdk-go-v2/service/internal/accept-encoding v1.10.1/go.mod h1:l9ymW25HOqymeU2m1gbUQ3rUIsTwKs8gYHXkqDQUhiI=
github.com/aws/aws-sdk-go-v2/service/internal/checksum v1.2.4 h1:6DRKQc+9cChgzL5gplRGusI5dBGeiEod4m/pmGbcX48=
github.com/aws/aws-sdk-go-v2/service/internal/checksum v1.2.4/go.mod h1:s8ORvrW4g4v7IvYKIAoBg17w3GQ+XuwXDXYrQ5SkzU0=
github.com/aws/aws-sdk-go-v2/service/internal/presigned-url v1.12.5 h1:wtpJ4zcwrSbwhECWQoI/g6WM9zqCcSpHDJIWSbMLOu4=
github.com/aws/aws-sdk-go-v2/service/internal/presigned-url v1.12.5/go.mod h1:qu/W9HXQbbQ4+1+JcZp0ZNPV31ym537ZJN+fiS7Ti8E=
github.com/aws/aws-sdk-go-v2/service/internal/presigned-url v1.10.4 h1:rdovz3rEu0vZKbzoMYPTehp0E8veoE9AyfzqCr5Eeao=
github.com/aws/aws-sdk-go-v2/service/internal/presigned-url v1.10.4/go.mod h1:aYCGNjyUCUelhofxlZyj63srdxWUSsBSGg5l6MCuXuE=
github.com/aws/aws-sdk-go-v2/service/internal/s3shared v1.16.4 h1:o3DcfCxGDIT20pTbVKVhp3vWXOj/VvgazNJvumWeYW0=
github.com/aws/aws-sdk-go-v2/service/internal/s3shared v1.16.4/go.mod h1:Uy0KVOxuTK2ne+/PKQ+VvEeWmjMMksE17k/2RK/r5oM=
github.com/aws/aws-sdk-go-v2/service/s3 v1.43.1 h1:1w11lfXOa8HoHoSlNtt4mqv/N3HmDOa+OnUH3Y9DHm8=
github.com/aws/aws-sdk-go-v2/service/s3 v1.43.1/go.mod h1:dqJ5JBL0clzgHriH35Amx3LRFY6wNIPUX7QO/BerSBo=
github.com/aws/aws-sdk-go-v2/service/sso v1.24.6 h1:3zu537oLmsPfDMyjnUS2g+F2vITgy5pB74tHI+JBNoM=
github.com/aws/aws-sdk-go-v2/service/sso v1.24.6/go.mod h1:WJSZH2ZvepM6t6jwu4w/Z45Eoi75lPN7DcydSRtJg6Y=
github.com/aws/aws-sdk-go-v2/service/ssooidc v1.28.5 h1:K0OQAsDywb0ltlFrZm0JHPY3yZp/S9OaoLU33S7vPS8=
github.com/aws/aws-sdk-go-v2/service/ssooidc v1.28.5/go.mod h1:ORITg+fyuMoeiQFiVGoqB3OydVTLkClw/ljbblMq6Cc=
github.com/aws/aws-sdk-go-v2/service/sts v1.33.1 h1:6SZUVRQNvExYlMLbHdlKB48x0fLbc2iVROyaNEwBHbU=
github.com/aws/aws-sdk-go-v2/service/sts v1.33.1/go.mod h1:GqWyYCwLXnlUB1lOAXQyNSPqPLQJvmo8J0DWBzp9mtg=
github.com/aws/smithy-go v1.22.1 h1:/HPHZQ0g7f4eUeK6HKglFz8uwVfZKgoI25rb/J+dnro=
github.com/aws/smithy-go v1.22.1/go.mod h1:irrKGvNn1InZwb2d7fkIRNucdfwR8R+Ts3wxYa/cJHg=
github.com/aws/aws-sdk-go-v2/service/sso v1.17.3 h1:CdsSOGlFF3Pn+koXOIpTtvX7st0IuGsZ8kJqcWMlX54=
github.com/aws/aws-sdk-go-v2/service/sso v1.17.3/go.mod h1:oA6VjNsLll2eVuUoF2D+CMyORgNzPEW/3PyUdq6WQjI=
github.com/aws/aws-sdk-go-v2/service/ssooidc v1.20.1 h1:cbRqFTVnJV+KRpwFl76GJdIZJKKCdTPnjUZ7uWh3pIU=
github.com/aws/aws-sdk-go-v2/service/ssooidc v1.20.1/go.mod h1:hHL974p5auvXlZPIjJTblXJpbkfK4klBczlsEaMCGVY=
github.com/aws/aws-sdk-go-v2/service/sts v1.25.4 h1:yEvZ4neOQ/KpUqyR+X0ycUTW/kVRNR4nDZ38wStHGAA=
github.com/aws/aws-sdk-go-v2/service/sts v1.25.4/go.mod h1:feTnm2Tk/pJxdX+eooEsxvlvTWBvDm6CasRZ+JOs2IY=
github.com/aws/smithy-go v1.17.0 h1:wWJD7LX6PBV6etBUwO0zElG0nWN9rUhp0WdYeHSHAaI=
github.com/aws/smithy-go v1.17.0/go.mod h1:NukqUGpCZIILqqiV0NIjeFh24kd/FAa4beRb6nbIUPE=
github.com/benbjohnson/clock v1.1.0 h1:Q92kusRqC1XV2MjkWETPvjJVqKetz1OzxZB7mHJLju8=
github.com/benbjohnson/clock v1.1.0/go.mod h1:J11/hYXuz8f4ySSvYwY0FKfm+ezbsZBKZxNJlLklBHA=
github.com/beorn7/perks v1.0.1 h1:VlbKKnNfV8bJzeqoa4cOKqO6bYr3WgKZxO8Z16+hsOM=
@@ -319,10 +319,10 @@ github.com/onsi/gomega v1.25.0 h1:Vw7br2PCDYijJHSfBOWhov+8cAnUf8MfMaIOV323l6Y=
github.com/onsi/gomega v1.25.0/go.mod h1:r+zV744Re+DiYCIPRlYOTxn0YkOLcAnW8k1xXdMPGhM=
github.com/openimsdk/gomake v0.0.14-alpha.5 h1:VY9c5x515lTfmdhhPjMvR3BBRrRquAUCFsz7t7vbv7Y=
github.com/openimsdk/gomake v0.0.14-alpha.5/go.mod h1:PndCozNc2IsQIciyn9mvEblYWZwJmAI+06z94EY+csI=
github.com/openimsdk/protocol v0.0.72-alpha.71 h1:R3utzOlqepaJWTAmnfJi4ccUM/XIoFasSyjQMOipM70=
github.com/openimsdk/protocol v0.0.72-alpha.71/go.mod h1:WF7EuE55vQvpyUAzDXcqg+B+446xQyEba0X35lTINmw=
github.com/openimsdk/tools v0.0.50-alpha.72 h1:d/vaZjIfvrNp3EeRJEIiamBO7HiPx6CP4wiuq8NpfzY=
github.com/openimsdk/tools v0.0.50-alpha.72/go.mod h1:B+oqV0zdewN7OiEHYJm+hW+8/Te7B8tHHgD8rK5ZLZk=
github.com/openimsdk/protocol v0.0.72-alpha.55 h1:9PPWPHvkFk3neBSbNr+IoOdKIFjxTvEqUfMK/TEq1+8=
github.com/openimsdk/protocol v0.0.72-alpha.55/go.mod h1:OZQA9FR55lseYoN2Ql1XAHYKHJGu7OMNkUbuekrKCM8=
github.com/openimsdk/tools v0.0.50-alpha.32 h1:JEsUFHFnaYg230TG+Ke3SUnaA2h44t4kABAzEdv5VZw=
github.com/openimsdk/tools v0.0.50-alpha.32/go.mod h1:r5U6RbxcR4xhKb2fhTmKGC9Yt5LcErHBVt3lhXQIHSo=
github.com/pelletier/go-toml/v2 v2.2.2 h1:aYUidT7k73Pcl9nb2gScu7NSrKCSHIDE89b3+6Wq+LM=
github.com/pelletier/go-toml/v2 v2.2.2/go.mod h1:1t835xjRzz80PqgE6HHgN2JOsmgYu/h4qDAS4n929Rs=
github.com/pierrec/lz4/v4 v4.1.21 h1:yOVMLb6qSIDP67pl/5F7RepeKYu/VmTyEXvuMI5d9mQ=
@@ -497,8 +497,8 @@ golang.org/x/net v0.7.0/go.mod h1:2Tu9+aMcznHK/AK1HMvgo6xiTLG5rD5rZLDS+rp2Bjs=
golang.org/x/net v0.29.0 h1:5ORfpBpCs4HzDYoodCDBbwHzdR5UrLBZ3sOnUJmFoHo=
golang.org/x/net v0.29.0/go.mod h1:gLkgy8jTGERgjzMic6DS9+SP0ajcu6Xu3Orq/SpETg0=
golang.org/x/oauth2 v0.0.0-20180821212333-d2e6202438be/go.mod h1:N/0e6XlmueqKjAGxoOufVs8QHGRruUQn6yWY3a++T0U=
golang.org/x/oauth2 v0.23.0 h1:PbgcYx2W7i4LvjJWEbf0ngHV6qJYr86PkAV3bXdLEbs=
golang.org/x/oauth2 v0.23.0/go.mod h1:XYTD2NtWslqkgxebSiOHnXEap4TF09sJSc7H1sXbhtI=
golang.org/x/oauth2 v0.21.0 h1:tsimM75w1tF/uws5rbeHzIWxEqElMehnc+iW793zsZs=
golang.org/x/oauth2 v0.21.0/go.mod h1:XYTD2NtWslqkgxebSiOHnXEap4TF09sJSc7H1sXbhtI=
golang.org/x/sync v0.0.0-20180314180146-1d60e4601c6f/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20181108010431-42b317875d0f/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20190423024810-112230192c58/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
@@ -565,8 +565,8 @@ google.golang.org/genproto v0.0.0-20190819201941-24fa4b261c55/go.mod h1:DMBHOl98
google.golang.org/genproto v0.0.0-20200526211855-cb27e3aa2013/go.mod h1:NbSheEEYHJ7i3ixzK3sjbqSGDJWnxyFXZblF3eUsNvo=
google.golang.org/genproto v0.0.0-20240213162025-012b6fc9bca9 h1:9+tzLLstTlPTRyJTh+ah5wIMsBW5c4tQwGTN3thOW9Y=
google.golang.org/genproto v0.0.0-20240213162025-012b6fc9bca9/go.mod h1:mqHbVIp48Muh7Ywss/AD6I5kNVKZMmAa/QEW58Gxp2s=
google.golang.org/genproto/googleapis/api v0.0.0-20240903143218-8af14fe29dc1 h1:hjSy6tcFQZ171igDaN5QHOw2n6vx40juYbC/x67CEhc=
google.golang.org/genproto/googleapis/api v0.0.0-20240903143218-8af14fe29dc1/go.mod h1:qpvKtACPCQhAdu3PyQgV4l3LMXZEtft7y8QcarRsp9I=
google.golang.org/genproto/googleapis/api v0.0.0-20240604185151-ef581f913117 h1:+rdxYoE3E5htTEWIe15GlN6IfvbURM//Jt0mmkmm6ZU=
google.golang.org/genproto/googleapis/api v0.0.0-20240604185151-ef581f913117/go.mod h1:OimBR/bc1wPO9iV4NC2bpyjy3VnAwZh5EBPQdtaE5oo=
google.golang.org/genproto/googleapis/rpc v0.0.0-20240903143218-8af14fe29dc1 h1:pPJltXNxVzT4pK9yD8vR9X75DaWYYmLGMsEvBfFQZzQ=
google.golang.org/genproto/googleapis/rpc v0.0.0-20240903143218-8af14fe29dc1/go.mod h1:UqMtugtsSgubUsoxbuAoiCXvqvErP7Gf0so0mK9tHxU=
google.golang.org/grpc v1.19.0/go.mod h1:mqu4LbDTu4XGKhr4mRzUsmM4RtVoemTSY81AxZiDr8c=
@@ -574,8 +574,8 @@ google.golang.org/grpc v1.23.0/go.mod h1:Y5yQAOtifL1yxbo5wqy6BxZv8vAUGQwXBOALyac
google.golang.org/grpc v1.25.1/go.mod h1:c3i+UQWmh7LiEpx4sFZnkU36qjEYZ0imhYfXVyQciAY=
google.golang.org/grpc v1.27.0/go.mod h1:qbnxyOmOxrQa7FizSgH+ReBfzJrCY1pSN7KXBS8abTk=
google.golang.org/grpc v1.33.2/go.mod h1:JMHMWHQWaTccqQQlmk3MJZS+GWXOdAesneDmEnv2fbc=
google.golang.org/grpc v1.68.0 h1:aHQeeJbo8zAkAa3pRzrVjZlbz6uSfeOXlJNQM0RAbz0=
google.golang.org/grpc v1.68.0/go.mod h1:fmSPC5AsjSBCK54MyHRx48kpOti1/jRfOlwEWywNjWA=
google.golang.org/grpc v1.66.2 h1:3QdXkuq3Bkh7w+ywLdLvM56cmGvQHUMZpiCzt6Rqaoo=
google.golang.org/grpc v1.66.2/go.mod h1:s3/l6xSSCURdVfAnL+TqCNMyTDAGN6+lZeVxnZR128Y=
google.golang.org/protobuf v0.0.0-20200109180630-ec00e32a8dfd/go.mod h1:DFci5gLYBciE7Vtevhsrf46CRTquxDuWsQurQQe4oz8=
google.golang.org/protobuf v0.0.0-20200221191635-4d8936d0db64/go.mod h1:kwYJMbMJ01Woi6D6+Kah6886xMZcty6N08ah7+eCXa0=
google.golang.org/protobuf v0.0.0-20200228230310-ab0ca4ff8a60/go.mod h1:cfTl7dwQJ+fmap5saPgwCLgHXTUD7jkjRqWcaiX5VyM=
@@ -585,8 +585,8 @@ google.golang.org/protobuf v1.22.0/go.mod h1:EGpADcykh3NcUnDUJcl1+ZksZNG86OlYog2
google.golang.org/protobuf v1.23.0/go.mod h1:EGpADcykh3NcUnDUJcl1+ZksZNG86OlYog2l/sGQquU=
google.golang.org/protobuf v1.23.1-0.20200526195155-81db48ad09cc/go.mod h1:EGpADcykh3NcUnDUJcl1+ZksZNG86OlYog2l/sGQquU=
google.golang.org/protobuf v1.25.0/go.mod h1:9JNX74DMeImyA3h4bdi1ymwjUzf21/xIlbajtzgsN7c=
google.golang.org/protobuf v1.35.1 h1:m3LfL6/Ca+fqnjnlqQXNpFPABW1UD7mjh8KO2mKFytA=
google.golang.org/protobuf v1.35.1/go.mod h1:9fA7Ob0pmnwhb644+1+CVWFRbNajQ6iRojtC/QF5bRE=
google.golang.org/protobuf v1.34.2 h1:6xV6lTsCfpGD21XK49h7MhtcApnLqkfYgPcdHftf6hg=
google.golang.org/protobuf v1.34.2/go.mod h1:qYOHts0dSfpeUzUFpOMr/WGzszTmLH+DiWniOlNbLDw=
gopkg.in/check.v1 v0.0.0-20161208181325-20d25e280405/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0=
gopkg.in/check.v1 v1.0.0-20180628173108-788fd7840127/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0=
gopkg.in/check.v1 v1.0.0-20201130134442-10cb98267c6c h1:Hei/4ADfdWqJk1ZMxUNpqntNwaWcugrBjAiHlqqRiVk=
+8 -9
@@ -16,30 +16,29 @@ package api
import (
"github.com/gin-gonic/gin"
"github.com/openimsdk/open-im-server/v3/pkg/rpcclient"
"github.com/openimsdk/protocol/auth"
"github.com/openimsdk/tools/a2r"
)
type AuthApi struct {
Client auth.AuthClient
}
type AuthApi rpcclient.Auth
func NewAuthApi(client auth.AuthClient) AuthApi {
return AuthApi{client}
func NewAuthApi(client rpcclient.Auth) AuthApi {
return AuthApi(client)
}
func (o *AuthApi) GetAdminToken(c *gin.Context) {
a2r.Call(c, auth.AuthClient.GetAdminToken, o.Client)
a2r.Call(auth.AuthClient.GetAdminToken, o.Client, c)
}
func (o *AuthApi) GetUserToken(c *gin.Context) {
a2r.Call(c, auth.AuthClient.GetUserToken, o.Client)
a2r.Call(auth.AuthClient.GetUserToken, o.Client, c)
}
func (o *AuthApi) ParseToken(c *gin.Context) {
a2r.Call(c, auth.AuthClient.ParseToken, o.Client)
a2r.Call(auth.AuthClient.ParseToken, o.Client, c)
}
func (o *AuthApi) ForceLogout(c *gin.Context) {
a2r.Call(c, auth.AuthClient.ForceLogout, o.Client)
a2r.Call(auth.AuthClient.ForceLogout, o.Client, c)
}
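Both sides of this file route every endpoint through the generic a2r.Call helper; the diff only moves the *gin.Context argument from the first position to the last. The helper itself is outside this diff, so the following is a simplified sketch (hypothetical package and error shape; the real a2r also accepts options) of what an adapter with this signature does: bind the JSON request, invoke the gRPC method expression, and write the JSON response.

package a2rsketch // illustrative only; not the real a2r package

import (
	"context"
	"net/http"

	"github.com/gin-gonic/gin"
	"google.golang.org/grpc"
)

// Call adapts a gRPC client method expression into a gin handler body:
// bind the request from JSON, invoke the RPC, write the JSON response.
// *gin.Context satisfies context.Context, so it can be passed straight through.
func Call[C any, Req any, Resp any](
	rpc func(C, context.Context, *Req, ...grpc.CallOption) (*Resp, error),
	client C,
	c *gin.Context,
) {
	var req Req
	if err := c.ShouldBindJSON(&req); err != nil {
		c.JSON(http.StatusBadRequest, gin.H{"errMsg": err.Error()})
		return
	}
	resp, err := rpc(client, c, &req)
	if err != nil {
		c.JSON(http.StatusInternalServerError, gin.H{"errMsg": err.Error()})
		return
	}
	c.JSON(http.StatusOK, resp)
}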
-313
@@ -1,313 +0,0 @@
package api
//
//import (
// "encoding/json"
// "reflect"
// "strconv"
// "time"
//
// "github.com/gin-gonic/gin"
// "github.com/openimsdk/open-im-server/v3/pkg/apistruct"
// "github.com/openimsdk/open-im-server/v3/pkg/authverify"
// "github.com/openimsdk/open-im-server/v3/pkg/common/config"
// "github.com/openimsdk/open-im-server/v3/pkg/common/discovery/etcd"
// "github.com/openimsdk/open-im-server/v3/version"
// "github.com/openimsdk/tools/apiresp"
// "github.com/openimsdk/tools/errs"
// "github.com/openimsdk/tools/log"
// "github.com/openimsdk/tools/utils/runtimeenv"
// clientv3 "go.etcd.io/etcd/client/v3"
//)
//
//const (
// // wait for the Restart HTTP call to return
// waitHttp = time.Millisecond * 200
//)
//
//type ConfigManager struct {
// imAdminUserID []string
// config *config.AllConfig
// client *clientv3.Client
//
// configPath string
// runtimeEnv string
//}
//
//func NewConfigManager(IMAdminUserID []string, cfg *config.AllConfig, client *clientv3.Client, configPath string, runtimeEnv string) *ConfigManager {
// cm := &ConfigManager{
// imAdminUserID: IMAdminUserID,
// config: cfg,
// client: client,
// configPath: configPath,
// runtimeEnv: runtimeEnv,
// }
// return cm
//}
//
//func (cm *ConfigManager) CheckAdmin(c *gin.Context) {
// if err := authverify.CheckAdmin(c, cm.imAdminUserID); err != nil {
// apiresp.GinError(c, err)
// c.Abort()
// }
//}
//
//func (cm *ConfigManager) GetConfig(c *gin.Context) {
// var req apistruct.GetConfigReq
// if err := c.BindJSON(&req); err != nil {
// apiresp.GinError(c, errs.ErrArgs.WithDetail(err.Error()).Wrap())
// return
// }
// conf := cm.config.Name2Config(req.ConfigName)
// if conf == nil {
// apiresp.GinError(c, errs.ErrArgs.WithDetail("config name not found").Wrap())
// return
// }
// b, err := json.Marshal(conf)
// if err != nil {
// apiresp.GinError(c, err)
// return
// }
// apiresp.GinSuccess(c, string(b))
//}
//
//func (cm *ConfigManager) GetConfigList(c *gin.Context) {
// var resp apistruct.GetConfigListResp
// resp.ConfigNames = cm.config.GetConfigNames()
// resp.Environment = runtimeenv.PrintRuntimeEnvironment()
// resp.Version = version.Version
//
// apiresp.GinSuccess(c, resp)
//}
//
//func (cm *ConfigManager) SetConfig(c *gin.Context) {
// if cm.config.Discovery.Enable != config.ETCD {
// apiresp.GinError(c, errs.New("only etcd support set config").Wrap())
// return
// }
// var req apistruct.SetConfigReq
// if err := c.BindJSON(&req); err != nil {
// apiresp.GinError(c, errs.ErrArgs.WithDetail(err.Error()).Wrap())
// return
// }
// var err error
// switch req.ConfigName {
// case cm.config.Discovery.GetConfigFileName():
// err = compareAndSave[config.Discovery](c, cm.config.Name2Config(req.ConfigName), &req, cm)
// case cm.config.Kafka.GetConfigFileName():
// err = compareAndSave[config.Kafka](c, cm.config.Name2Config(req.ConfigName), &req, cm)
// case cm.config.LocalCache.GetConfigFileName():
// err = compareAndSave[config.LocalCache](c, cm.config.Name2Config(req.ConfigName), &req, cm)
// case cm.config.Log.GetConfigFileName():
// err = compareAndSave[config.Log](c, cm.config.Name2Config(req.ConfigName), &req, cm)
// case cm.config.Minio.GetConfigFileName():
// err = compareAndSave[config.Minio](c, cm.config.Name2Config(req.ConfigName), &req, cm)
// case cm.config.Mongo.GetConfigFileName():
// err = compareAndSave[config.Mongo](c, cm.config.Name2Config(req.ConfigName), &req, cm)
// case cm.config.Notification.GetConfigFileName():
// err = compareAndSave[config.Notification](c, cm.config.Name2Config(req.ConfigName), &req, cm)
// case cm.config.API.GetConfigFileName():
// err = compareAndSave[config.API](c, cm.config.Name2Config(req.ConfigName), &req, cm)
// case cm.config.CronTask.GetConfigFileName():
// err = compareAndSave[config.CronTask](c, cm.config.Name2Config(req.ConfigName), &req, cm)
// case cm.config.MsgGateway.GetConfigFileName():
// err = compareAndSave[config.MsgGateway](c, cm.config.Name2Config(req.ConfigName), &req, cm)
// case cm.config.MsgTransfer.GetConfigFileName():
// err = compareAndSave[config.MsgTransfer](c, cm.config.Name2Config(req.ConfigName), &req, cm)
// case cm.config.Push.GetConfigFileName():
// err = compareAndSave[config.Push](c, cm.config.Name2Config(req.ConfigName), &req, cm)
// case cm.config.Auth.GetConfigFileName():
// err = compareAndSave[config.Auth](c, cm.config.Name2Config(req.ConfigName), &req, cm)
// case cm.config.Conversation.GetConfigFileName():
// err = compareAndSave[config.Conversation](c, cm.config.Name2Config(req.ConfigName), &req, cm)
// case cm.config.Friend.GetConfigFileName():
// err = compareAndSave[config.Friend](c, cm.config.Name2Config(req.ConfigName), &req, cm)
// case cm.config.Group.GetConfigFileName():
// err = compareAndSave[config.Group](c, cm.config.Name2Config(req.ConfigName), &req, cm)
// case cm.config.Msg.GetConfigFileName():
// err = compareAndSave[config.Msg](c, cm.config.Name2Config(req.ConfigName), &req, cm)
// case cm.config.Third.GetConfigFileName():
// err = compareAndSave[config.Third](c, cm.config.Name2Config(req.ConfigName), &req, cm)
// case cm.config.User.GetConfigFileName():
// err = compareAndSave[config.User](c, cm.config.Name2Config(req.ConfigName), &req, cm)
// case cm.config.Redis.GetConfigFileName():
// err = compareAndSave[config.Redis](c, cm.config.Name2Config(req.ConfigName), &req, cm)
// case cm.config.Share.GetConfigFileName():
// err = compareAndSave[config.Share](c, cm.config.Name2Config(req.ConfigName), &req, cm)
// case cm.config.Webhooks.GetConfigFileName():
// err = compareAndSave[config.Webhooks](c, cm.config.Name2Config(req.ConfigName), &req, cm)
// default:
// apiresp.GinError(c, errs.ErrArgs.Wrap())
// return
// }
// if err != nil {
// apiresp.GinError(c, errs.ErrArgs.WithDetail(err.Error()).Wrap())
// return
// }
// apiresp.GinSuccess(c, nil)
//}
//
//func compareAndSave[T any](c *gin.Context, old any, req *apistruct.SetConfigReq, cm *ConfigManager) error {
// conf := new(T)
// err := json.Unmarshal([]byte(req.Data), &conf)
// if err != nil {
// return errs.ErrArgs.WithDetail(err.Error()).Wrap()
// }
// eq := reflect.DeepEqual(old, conf)
// if eq {
// return nil
// }
// data, err := json.Marshal(conf)
// if err != nil {
// return errs.ErrArgs.WithDetail(err.Error()).Wrap()
// }
// _, err = cm.client.Put(c, etcd.BuildKey(req.ConfigName), string(data))
// if err != nil {
// return errs.WrapMsg(err, "save to etcd failed")
// }
// return nil
//}
//
//func (cm *ConfigManager) ResetConfig(c *gin.Context) {
// go func() {
// if err := cm.resetConfig(c, true); err != nil {
// log.ZError(c, "reset config err", err)
// }
// }()
// apiresp.GinSuccess(c, nil)
//}
//
//func (cm *ConfigManager) resetConfig(c *gin.Context, checkChange bool, ops ...clientv3.Op) error {
// txn := cm.client.Txn(c)
// type initConf struct {
// old any
// new any
// }
// configMap := map[string]*initConf{
// cm.config.Discovery.GetConfigFileName(): {old: &cm.config.Discovery, new: new(config.Discovery)},
// cm.config.Kafka.GetConfigFileName(): {old: &cm.config.Kafka, new: new(config.Kafka)},
// cm.config.LocalCache.GetConfigFileName(): {old: &cm.config.LocalCache, new: new(config.LocalCache)},
// cm.config.Log.GetConfigFileName(): {old: &cm.config.Log, new: new(config.Log)},
// cm.config.Minio.GetConfigFileName(): {old: &cm.config.Minio, new: new(config.Minio)},
// cm.config.Mongo.GetConfigFileName(): {old: &cm.config.Mongo, new: new(config.Mongo)},
// cm.config.Notification.GetConfigFileName(): {old: &cm.config.Notification, new: new(config.Notification)},
// cm.config.API.GetConfigFileName(): {old: &cm.config.API, new: new(config.API)},
// cm.config.CronTask.GetConfigFileName(): {old: &cm.config.CronTask, new: new(config.CronTask)},
// cm.config.MsgGateway.GetConfigFileName(): {old: &cm.config.MsgGateway, new: new(config.MsgGateway)},
// cm.config.MsgTransfer.GetConfigFileName(): {old: &cm.config.MsgTransfer, new: new(config.MsgTransfer)},
// cm.config.Push.GetConfigFileName(): {old: &cm.config.Push, new: new(config.Push)},
// cm.config.Auth.GetConfigFileName(): {old: &cm.config.Auth, new: new(config.Auth)},
// cm.config.Conversation.GetConfigFileName(): {old: &cm.config.Conversation, new: new(config.Conversation)},
// cm.config.Friend.GetConfigFileName(): {old: &cm.config.Friend, new: new(config.Friend)},
// cm.config.Group.GetConfigFileName(): {old: &cm.config.Group, new: new(config.Group)},
// cm.config.Msg.GetConfigFileName(): {old: &cm.config.Msg, new: new(config.Msg)},
// cm.config.Third.GetConfigFileName(): {old: &cm.config.Third, new: new(config.Third)},
// cm.config.User.GetConfigFileName(): {old: &cm.config.User, new: new(config.User)},
// cm.config.Redis.GetConfigFileName(): {old: &cm.config.Redis, new: new(config.Redis)},
// cm.config.Share.GetConfigFileName(): {old: &cm.config.Share, new: new(config.Share)},
// cm.config.Webhooks.GetConfigFileName(): {old: &cm.config.Webhooks, new: new(config.Webhooks)},
// }
//
// changedKeys := make([]string, 0, len(configMap))
// for k, v := range configMap {
// err := config.Load(
// cm.configPath,
// k,
// config.EnvPrefixMap[k],
// cm.runtimeEnv,
// v.new,
// )
// if err != nil {
// log.ZError(c, "load config failed", err)
// continue
// }
// equal := reflect.DeepEqual(v.old, v.new)
// if !checkChange || !equal {
// changedKeys = append(changedKeys, k)
// }
// }
//
// for _, k := range changedKeys {
// data, err := json.Marshal(configMap[k].new)
// if err != nil {
// log.ZError(c, "marshal config failed", err)
// continue
// }
// ops = append(ops, clientv3.OpPut(etcd.BuildKey(k), string(data)))
// }
// if len(ops) > 0 {
// txn.Then(ops...)
// _, err := txn.Commit()
// if err != nil {
// return errs.WrapMsg(err, "commit etcd txn failed")
// }
// }
// return nil
//}
//
//func (cm *ConfigManager) Restart(c *gin.Context) {
// go cm.restart(c)
// apiresp.GinSuccess(c, nil)
//}
//
//func (cm *ConfigManager) restart(c *gin.Context) {
// time.Sleep(waitHttp) // wait for the Restart HTTP call to return
// t := time.Now().Unix()
// _, err := cm.client.Put(c, etcd.BuildKey(etcd.RestartKey), strconv.Itoa(int(t)))
// if err != nil {
// log.ZError(c, "restart etcd put key failed", err)
// }
//}
//
//func (cm *ConfigManager) SetEnableConfigManager(c *gin.Context) {
// if cm.config.Discovery.Enable != config.ETCD {
// apiresp.GinError(c, errs.New("only etcd support config manager").Wrap())
// return
// }
// var req apistruct.SetEnableConfigManagerReq
// if err := c.BindJSON(&req); err != nil {
// apiresp.GinError(c, errs.ErrArgs.WithDetail(err.Error()).Wrap())
// return
// }
// var enableStr string
// if req.Enable {
// enableStr = etcd.Enable
// } else {
// enableStr = etcd.Disable
// }
// resp, err := cm.client.Get(c, etcd.BuildKey(etcd.EnableConfigCenterKey))
// if err != nil {
// apiresp.GinError(c, errs.WrapMsg(err, "getEnableConfigManager failed"))
// return
// }
// if !(resp.Count > 0 && string(resp.Kvs[0].Value) == etcd.Enable) && req.Enable {
// go func() {
// time.Sleep(waitHttp) // wait for the Restart HTTP call to return
// err := cm.resetConfig(c, false, clientv3.OpPut(etcd.BuildKey(etcd.EnableConfigCenterKey), enableStr))
// if err != nil {
// log.ZError(c, "resetConfig failed", err)
// }
// }()
// } else {
// _, err = cm.client.Put(c, etcd.BuildKey(etcd.EnableConfigCenterKey), enableStr)
// if err != nil {
// apiresp.GinError(c, errs.WrapMsg(err, "setEnableConfigManager failed"))
// return
// }
// }
//
// apiresp.GinSuccess(c, nil)
//}
//
//func (cm *ConfigManager) GetEnableConfigManager(c *gin.Context) {
// resp, err := cm.client.Get(c, etcd.BuildKey(etcd.EnableConfigCenterKey))
// if err != nil {
// apiresp.GinError(c, errs.WrapMsg(err, "getEnableConfigManager failed"))
// return
// }
// var enable bool
// if resp.Count > 0 && string(resp.Kvs[0].Value) == etcd.Enable {
// enable = true
// }
// apiresp.GinSuccess(c, &apistruct.GetEnableConfigManagerResp{Enable: enable})
//}
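The removed compareAndSave above follows a compare-then-put idiom: decode the submitted JSON into the typed config, skip the write when reflect.DeepEqual sees no change, otherwise re-marshal and Put to etcd. Condensed into one hedged sketch (assuming an etcd client/v3 handle; names are illustrative):

package cfgsketch // illustrative helper, not part of the repo

import (
	"context"
	"encoding/json"
	"reflect"

	clientv3 "go.etcd.io/etcd/client/v3"
)

// putIfChanged condenses the removed compareAndSave idiom: decode the
// submitted JSON into the typed config, then write to etcd only when
// the decoded value differs from the current one.
func putIfChanged[T any](ctx context.Context, cli *clientv3.Client, key string, current *T, raw []byte) error {
	next := new(T)
	if err := json.Unmarshal(raw, next); err != nil {
		return err
	}
	if reflect.DeepEqual(current, next) {
		return nil // unchanged: avoid a redundant etcd write
	}
	data, err := json.Marshal(next)
	if err != nil {
		return err
	}
	_, err = cli.Put(ctx, key, string(data))
	return err
}

A read-compare-write like this can race with a concurrent writer; a transaction guarded by the key's mod revision would close that window, a trade-off the removed code evidently accepted for an admin-only endpoint.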
+15 -16
@@ -16,58 +16,57 @@ package api
import (
"github.com/gin-gonic/gin"
"github.com/openimsdk/open-im-server/v3/pkg/rpcclient"
"github.com/openimsdk/protocol/conversation"
"github.com/openimsdk/tools/a2r"
)
type ConversationApi struct {
Client conversation.ConversationClient
}
type ConversationApi rpcclient.Conversation
func NewConversationApi(client conversation.ConversationClient) ConversationApi {
return ConversationApi{client}
func NewConversationApi(client rpcclient.Conversation) ConversationApi {
return ConversationApi(client)
}
func (o *ConversationApi) GetAllConversations(c *gin.Context) {
a2r.Call(c, conversation.ConversationClient.GetAllConversations, o.Client)
a2r.Call(conversation.ConversationClient.GetAllConversations, o.Client, c)
}
func (o *ConversationApi) GetSortedConversationList(c *gin.Context) {
a2r.Call(c, conversation.ConversationClient.GetSortedConversationList, o.Client)
a2r.Call(conversation.ConversationClient.GetSortedConversationList, o.Client, c)
}
func (o *ConversationApi) GetConversation(c *gin.Context) {
a2r.Call(c, conversation.ConversationClient.GetConversation, o.Client)
a2r.Call(conversation.ConversationClient.GetConversation, o.Client, c)
}
func (o *ConversationApi) GetConversations(c *gin.Context) {
a2r.Call(c, conversation.ConversationClient.GetConversations, o.Client)
a2r.Call(conversation.ConversationClient.GetConversations, o.Client, c)
}
func (o *ConversationApi) SetConversations(c *gin.Context) {
a2r.Call(c, conversation.ConversationClient.SetConversations, o.Client)
a2r.Call(conversation.ConversationClient.SetConversations, o.Client, c)
}
func (o *ConversationApi) GetConversationOfflinePushUserIDs(c *gin.Context) {
a2r.Call(c, conversation.ConversationClient.GetConversationOfflinePushUserIDs, o.Client)
a2r.Call(conversation.ConversationClient.GetConversationOfflinePushUserIDs, o.Client, c)
}
func (o *ConversationApi) GetFullOwnerConversationIDs(c *gin.Context) {
a2r.Call(c, conversation.ConversationClient.GetFullOwnerConversationIDs, o.Client)
a2r.Call(conversation.ConversationClient.GetFullOwnerConversationIDs, o.Client, c)
}
func (o *ConversationApi) GetIncrementalConversation(c *gin.Context) {
a2r.Call(c, conversation.ConversationClient.GetIncrementalConversation, o.Client)
a2r.Call(conversation.ConversationClient.GetIncrementalConversation, o.Client, c)
}
func (o *ConversationApi) GetOwnerConversation(c *gin.Context) {
a2r.Call(c, conversation.ConversationClient.GetOwnerConversation, o.Client)
a2r.Call(conversation.ConversationClient.GetOwnerConversation, o.Client, c)
}
func (o *ConversationApi) GetNotNotifyConversationIDs(c *gin.Context) {
a2r.Call(c, conversation.ConversationClient.GetNotNotifyConversationIDs, o.Client)
a2r.Call(conversation.ConversationClient.GetNotNotifyConversationIDs, o.Client, c)
}
func (o *ConversationApi) GetPinnedConversationIDs(c *gin.Context) {
a2r.Call(c, conversation.ConversationClient.GetPinnedConversationIDs, o.Client)
a2r.Call(conversation.ConversationClient.GetPinnedConversationIDs, o.Client, c)
}
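As in the auth file, one side declares ConversationApi as a struct holding the client while the other defines it directly over rpcclient.Conversation and constructs it with a plain conversion, ConversationApi(client). A standalone illustration of why that conversion is legal and cheap (the types here are stand-ins, not the real rpcclient ones):

package main

import "fmt"

// Conversation stands in for rpcclient.Conversation.
type Conversation struct{ Addr string }

// ConversationApi shares Conversation's underlying type, so values
// convert directly and the new type can carry its own methods.
type ConversationApi Conversation

func NewConversationApi(client Conversation) ConversationApi {
	return ConversationApi(client) // a type conversion, not a wrapper allocation
}

func (o *ConversationApi) Describe() string { return "conversation api at " + o.Addr }

func main() {
	api := NewConversationApi(Conversation{Addr: "127.0.0.1:10180"})
	fmt.Println(api.Describe())
}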
+25 -26
@@ -17,100 +17,99 @@ package api
import (
"github.com/gin-gonic/gin"
"github.com/openimsdk/open-im-server/v3/pkg/rpcclient"
"github.com/openimsdk/protocol/relation"
"github.com/openimsdk/tools/a2r"
)
type FriendApi struct {
Client relation.FriendClient
}
type FriendApi rpcclient.Friend
func NewFriendApi(client relation.FriendClient) FriendApi {
return FriendApi{client}
func NewFriendApi(client rpcclient.Friend) FriendApi {
return FriendApi(client)
}
func (o *FriendApi) ApplyToAddFriend(c *gin.Context) {
a2r.Call(c, relation.FriendClient.ApplyToAddFriend, o.Client)
a2r.Call(relation.FriendClient.ApplyToAddFriend, o.Client, c)
}
func (o *FriendApi) RespondFriendApply(c *gin.Context) {
a2r.Call(c, relation.FriendClient.RespondFriendApply, o.Client)
a2r.Call(relation.FriendClient.RespondFriendApply, o.Client, c)
}
func (o *FriendApi) DeleteFriend(c *gin.Context) {
a2r.Call(c, relation.FriendClient.DeleteFriend, o.Client)
a2r.Call(relation.FriendClient.DeleteFriend, o.Client, c)
}
func (o *FriendApi) GetFriendApplyList(c *gin.Context) {
a2r.Call(c, relation.FriendClient.GetPaginationFriendsApplyTo, o.Client)
a2r.Call(relation.FriendClient.GetPaginationFriendsApplyTo, o.Client, c)
}
func (o *FriendApi) GetDesignatedFriendsApply(c *gin.Context) {
a2r.Call(c, relation.FriendClient.GetDesignatedFriendsApply, o.Client)
a2r.Call(relation.FriendClient.GetDesignatedFriendsApply, o.Client, c)
}
func (o *FriendApi) GetSelfApplyList(c *gin.Context) {
a2r.Call(c, relation.FriendClient.GetPaginationFriendsApplyFrom, o.Client)
a2r.Call(relation.FriendClient.GetPaginationFriendsApplyFrom, o.Client, c)
}
func (o *FriendApi) GetFriendList(c *gin.Context) {
a2r.Call(c, relation.FriendClient.GetPaginationFriends, o.Client)
a2r.Call(relation.FriendClient.GetPaginationFriends, o.Client, c)
}
func (o *FriendApi) GetDesignatedFriends(c *gin.Context) {
a2r.Call(c, relation.FriendClient.GetDesignatedFriends, o.Client)
a2r.Call(relation.FriendClient.GetDesignatedFriends, o.Client, c)
}
func (o *FriendApi) SetFriendRemark(c *gin.Context) {
a2r.Call(c, relation.FriendClient.SetFriendRemark, o.Client)
a2r.Call(relation.FriendClient.SetFriendRemark, o.Client, c)
}
func (o *FriendApi) AddBlack(c *gin.Context) {
a2r.Call(c, relation.FriendClient.AddBlack, o.Client)
a2r.Call(relation.FriendClient.AddBlack, o.Client, c)
}
func (o *FriendApi) GetPaginationBlacks(c *gin.Context) {
a2r.Call(c, relation.FriendClient.GetPaginationBlacks, o.Client)
a2r.Call(relation.FriendClient.GetPaginationBlacks, o.Client, c)
}
func (o *FriendApi) GetSpecifiedBlacks(c *gin.Context) {
a2r.Call(c, relation.FriendClient.GetSpecifiedBlacks, o.Client)
a2r.Call(relation.FriendClient.GetSpecifiedBlacks, o.Client, c)
}
func (o *FriendApi) RemoveBlack(c *gin.Context) {
a2r.Call(c, relation.FriendClient.RemoveBlack, o.Client)
a2r.Call(relation.FriendClient.RemoveBlack, o.Client, c)
}
func (o *FriendApi) ImportFriends(c *gin.Context) {
a2r.Call(c, relation.FriendClient.ImportFriends, o.Client)
a2r.Call(relation.FriendClient.ImportFriends, o.Client, c)
}
func (o *FriendApi) IsFriend(c *gin.Context) {
a2r.Call(c, relation.FriendClient.IsFriend, o.Client)
a2r.Call(relation.FriendClient.IsFriend, o.Client, c)
}
func (o *FriendApi) GetFriendIDs(c *gin.Context) {
a2r.Call(c, relation.FriendClient.GetFriendIDs, o.Client)
a2r.Call(relation.FriendClient.GetFriendIDs, o.Client, c)
}
func (o *FriendApi) GetSpecifiedFriendsInfo(c *gin.Context) {
a2r.Call(c, relation.FriendClient.GetSpecifiedFriendsInfo, o.Client)
a2r.Call(relation.FriendClient.GetSpecifiedFriendsInfo, o.Client, c)
}
func (o *FriendApi) UpdateFriends(c *gin.Context) {
a2r.Call(c, relation.FriendClient.UpdateFriends, o.Client)
a2r.Call(relation.FriendClient.UpdateFriends, o.Client, c)
}
func (o *FriendApi) GetIncrementalFriends(c *gin.Context) {
a2r.Call(c, relation.FriendClient.GetIncrementalFriends, o.Client)
a2r.Call(relation.FriendClient.GetIncrementalFriends, o.Client, c)
}
// GetIncrementalBlacks is temporarily unused.
// Deprecated: This function is currently unused and may be removed in future versions.
func (o *FriendApi) GetIncrementalBlacks(c *gin.Context) {
a2r.Call(c, relation.FriendClient.GetIncrementalBlacks, o.Client)
a2r.Call(relation.FriendClient.GetIncrementalBlacks, o.Client, c)
}
func (o *FriendApi) GetFullFriendUserIDs(c *gin.Context) {
a2r.Call(c, relation.FriendClient.GetFullFriendUserIDs, o.Client)
a2r.Call(relation.FriendClient.GetFullFriendUserIDs, o.Client, c)
}
+40 -41
@@ -16,152 +16,151 @@ package api
import (
"github.com/gin-gonic/gin"
"github.com/openimsdk/open-im-server/v3/pkg/rpcclient"
"github.com/openimsdk/protocol/group"
"github.com/openimsdk/tools/a2r"
)
type GroupApi struct {
Client group.GroupClient
}
type GroupApi rpcclient.Group
func NewGroupApi(client group.GroupClient) GroupApi {
return GroupApi{client}
func NewGroupApi(client rpcclient.Group) GroupApi {
return GroupApi(client)
}
func (o *GroupApi) CreateGroup(c *gin.Context) {
a2r.Call(c, group.GroupClient.CreateGroup, o.Client)
a2r.Call(group.GroupClient.CreateGroup, o.Client, c)
}
func (o *GroupApi) SetGroupInfo(c *gin.Context) {
a2r.Call(c, group.GroupClient.SetGroupInfo, o.Client)
a2r.Call(group.GroupClient.SetGroupInfo, o.Client, c)
}
func (o *GroupApi) SetGroupInfoEx(c *gin.Context) {
a2r.Call(c, group.GroupClient.SetGroupInfoEx, o.Client)
a2r.Call(group.GroupClient.SetGroupInfoEx, o.Client, c)
}
func (o *GroupApi) JoinGroup(c *gin.Context) {
a2r.Call(c, group.GroupClient.JoinGroup, o.Client)
a2r.Call(group.GroupClient.JoinGroup, o.Client, c)
}
func (o *GroupApi) QuitGroup(c *gin.Context) {
a2r.Call(c, group.GroupClient.QuitGroup, o.Client)
a2r.Call(group.GroupClient.QuitGroup, o.Client, c)
}
func (o *GroupApi) ApplicationGroupResponse(c *gin.Context) {
a2r.Call(c, group.GroupClient.GroupApplicationResponse, o.Client)
a2r.Call(group.GroupClient.GroupApplicationResponse, o.Client, c)
}
func (o *GroupApi) TransferGroupOwner(c *gin.Context) {
a2r.Call(c, group.GroupClient.TransferGroupOwner, o.Client)
a2r.Call(group.GroupClient.TransferGroupOwner, o.Client, c)
}
func (o *GroupApi) GetRecvGroupApplicationList(c *gin.Context) {
a2r.Call(c, group.GroupClient.GetGroupApplicationList, o.Client)
a2r.Call(group.GroupClient.GetGroupApplicationList, o.Client, c)
}
func (o *GroupApi) GetUserReqGroupApplicationList(c *gin.Context) {
a2r.Call(c, group.GroupClient.GetUserReqApplicationList, o.Client)
a2r.Call(group.GroupClient.GetUserReqApplicationList, o.Client, c)
}
func (o *GroupApi) GetGroupUsersReqApplicationList(c *gin.Context) {
a2r.Call(c, group.GroupClient.GetGroupUsersReqApplicationList, o.Client)
a2r.Call(group.GroupClient.GetGroupUsersReqApplicationList, o.Client, c)
}
func (o *GroupApi) GetSpecifiedUserGroupRequestInfo(c *gin.Context) {
a2r.Call(c, group.GroupClient.GetSpecifiedUserGroupRequestInfo, o.Client)
a2r.Call(group.GroupClient.GetSpecifiedUserGroupRequestInfo, o.Client, c)
}
func (o *GroupApi) GetGroupsInfo(c *gin.Context) {
a2r.Call(c, group.GroupClient.GetGroupsInfo, o.Client)
//a2r.Call(c, group.GroupClient.GetGroupsInfo, o.Client, c, a2r.NewNilReplaceOption(group.GroupClient.GetGroupsInfo))
a2r.Call(group.GroupClient.GetGroupsInfo, o.Client, c)
//a2r.Call(group.GroupClient.GetGroupsInfo, o.Client, c, a2r.NewNilReplaceOption(group.GroupClient.GetGroupsInfo))
}
func (o *GroupApi) KickGroupMember(c *gin.Context) {
a2r.Call(c, group.GroupClient.KickGroupMember, o.Client)
a2r.Call(group.GroupClient.KickGroupMember, o.Client, c)
}
func (o *GroupApi) GetGroupMembersInfo(c *gin.Context) {
a2r.Call(c, group.GroupClient.GetGroupMembersInfo, o.Client)
//a2r.Call(c, group.GroupClient.GetGroupMembersInfo, o.Client, c, a2r.NewNilReplaceOption(group.GroupClient.GetGroupMembersInfo))
a2r.Call(group.GroupClient.GetGroupMembersInfo, o.Client, c)
//a2r.Call(group.GroupClient.GetGroupMembersInfo, o.Client, c, a2r.NewNilReplaceOption(group.GroupClient.GetGroupMembersInfo))
}
func (o *GroupApi) GetGroupMemberList(c *gin.Context) {
a2r.Call(c, group.GroupClient.GetGroupMemberList, o.Client)
a2r.Call(group.GroupClient.GetGroupMemberList, o.Client, c)
}
func (o *GroupApi) InviteUserToGroup(c *gin.Context) {
a2r.Call(c, group.GroupClient.InviteUserToGroup, o.Client)
a2r.Call(group.GroupClient.InviteUserToGroup, o.Client, c)
}
func (o *GroupApi) GetJoinedGroupList(c *gin.Context) {
a2r.Call(c, group.GroupClient.GetJoinedGroupList, o.Client)
a2r.Call(group.GroupClient.GetJoinedGroupList, o.Client, c)
}
func (o *GroupApi) DismissGroup(c *gin.Context) {
a2r.Call(c, group.GroupClient.DismissGroup, o.Client)
a2r.Call(group.GroupClient.DismissGroup, o.Client, c)
}
func (o *GroupApi) MuteGroupMember(c *gin.Context) {
a2r.Call(c, group.GroupClient.MuteGroupMember, o.Client)
a2r.Call(group.GroupClient.MuteGroupMember, o.Client, c)
}
func (o *GroupApi) CancelMuteGroupMember(c *gin.Context) {
a2r.Call(c, group.GroupClient.CancelMuteGroupMember, o.Client)
a2r.Call(group.GroupClient.CancelMuteGroupMember, o.Client, c)
}
func (o *GroupApi) MuteGroup(c *gin.Context) {
a2r.Call(c, group.GroupClient.MuteGroup, o.Client)
a2r.Call(group.GroupClient.MuteGroup, o.Client, c)
}
func (o *GroupApi) CancelMuteGroup(c *gin.Context) {
a2r.Call(c, group.GroupClient.CancelMuteGroup, o.Client)
a2r.Call(group.GroupClient.CancelMuteGroup, o.Client, c)
}
func (o *GroupApi) SetGroupMemberInfo(c *gin.Context) {
a2r.Call(c, group.GroupClient.SetGroupMemberInfo, o.Client)
a2r.Call(group.GroupClient.SetGroupMemberInfo, o.Client, c)
}
func (o *GroupApi) GetGroupAbstractInfo(c *gin.Context) {
a2r.Call(c, group.GroupClient.GetGroupAbstractInfo, o.Client)
a2r.Call(group.GroupClient.GetGroupAbstractInfo, o.Client, c)
}
// func (g *Group) SetGroupMemberNickname(c *gin.Context) {
// a2r.Call(c, group.GroupClient.SetGroupMemberNickname, g.userClient)
// a2r.Call(group.GroupClient.SetGroupMemberNickname, g.userClient, c)
//}
//
// func (g *Group) GetGroupAllMemberList(c *gin.Context) {
// a2r.Call(c, group.GroupClient.GetGroupAllMember, g.userClient)
// a2r.Call(group.GroupClient.GetGroupAllMember, g.userClient, c)
//}
func (o *GroupApi) GroupCreateCount(c *gin.Context) {
a2r.Call(c, group.GroupClient.GroupCreateCount, o.Client)
a2r.Call(group.GroupClient.GroupCreateCount, o.Client, c)
}
func (o *GroupApi) GetGroups(c *gin.Context) {
a2r.Call(c, group.GroupClient.GetGroups, o.Client)
a2r.Call(group.GroupClient.GetGroups, o.Client, c)
}
func (o *GroupApi) GetGroupMemberUserIDs(c *gin.Context) {
a2r.Call(c, group.GroupClient.GetGroupMemberUserIDs, o.Client)
a2r.Call(group.GroupClient.GetGroupMemberUserIDs, o.Client, c)
}
func (o *GroupApi) GetIncrementalJoinGroup(c *gin.Context) {
a2r.Call(c, group.GroupClient.GetIncrementalJoinGroup, o.Client)
a2r.Call(group.GroupClient.GetIncrementalJoinGroup, o.Client, c)
}
func (o *GroupApi) GetIncrementalGroupMember(c *gin.Context) {
a2r.Call(c, group.GroupClient.GetIncrementalGroupMember, o.Client)
a2r.Call(group.GroupClient.GetIncrementalGroupMember, o.Client, c)
}
func (o *GroupApi) GetIncrementalGroupMemberBatch(c *gin.Context) {
a2r.Call(c, group.GroupClient.BatchGetIncrementalGroupMember, o.Client)
a2r.Call(group.GroupClient.BatchGetIncrementalGroupMember, o.Client, c)
}
func (o *GroupApi) GetFullGroupMemberUserIDs(c *gin.Context) {
a2r.Call(c, group.GroupClient.GetFullGroupMemberUserIDs, o.Client)
a2r.Call(group.GroupClient.GetFullGroupMemberUserIDs, o.Client, c)
}
func (o *GroupApi) GetFullJoinGroupIDs(c *gin.Context) {
a2r.Call(c, group.GroupClient.GetFullJoinGroupIDs, o.Client)
a2r.Call(group.GroupClient.GetFullJoinGroupIDs, o.Client, c)
}
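Every one-line handler in these files leans on Go method expressions: group.GroupClient.CreateGroup names an interface method as an ordinary function value whose first parameter is the receiver, which is exactly what lets a2r.Call accept it. A minimal standalone demo of the mechanism:

package main

import "fmt"

type Greeter interface {
	Greet(name string) string
}

type english struct{}

func (english) Greet(name string) string { return "hello, " + name }

func main() {
	// Method expression: the receiver becomes the first argument.
	f := Greeter.Greet // type: func(Greeter, string) string
	fmt.Println(f(english{}, "openim"))
}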
+31 -72
@@ -1,9 +1,25 @@
// Copyright © 2023 OpenIM. All rights reserved.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package api
import (
"context"
"errors"
"fmt"
"github.com/openimsdk/open-im-server/v3/pkg/common/config"
"github.com/openimsdk/tools/utils/datautil"
"github.com/openimsdk/tools/utils/network"
"net"
"net/http"
"os"
@@ -12,21 +28,12 @@ import (
"syscall"
"time"
"github.com/openimsdk/open-im-server/v3/pkg/common/config"
"github.com/openimsdk/tools/mw"
"github.com/openimsdk/tools/utils/datautil"
"github.com/openimsdk/tools/utils/network"
"google.golang.org/grpc"
"google.golang.org/grpc/credentials/insecure"
kdisc "github.com/openimsdk/open-im-server/v3/pkg/common/discoveryregister"
"github.com/openimsdk/open-im-server/v3/pkg/common/prommetrics"
"github.com/openimsdk/tools/discovery"
"github.com/openimsdk/tools/discovery/etcd"
"github.com/openimsdk/tools/errs"
"github.com/openimsdk/tools/log"
"github.com/openimsdk/tools/system/program"
"github.com/openimsdk/tools/utils/jsonutil"
)
type Config struct {
@@ -35,8 +42,8 @@ type Config struct {
Discovery config.Discovery
}
func Start(ctx context.Context, index int, cfg *Config) error {
apiPort, err := datautil.GetElemByIndex(cfg.API.Api.Ports, index)
func Start(ctx context.Context, index int, config *Config) error {
apiPort, err := datautil.GetElemByIndex(config.API.Api.Ports, index)
if err != nil {
return err
}
@@ -44,14 +51,10 @@ func Start(ctx context.Context, index int, cfg *Config) error {
var client discovery.SvcDiscoveryRegistry
// Determine whether zookeeper is used based on whether the deployment is clustered
client, err = kdisc.NewDiscoveryRegister(&cfg.Discovery, &cfg.Share, []string{
cfg.Share.RpcRegisterName.MessageGateway,
})
client, err = kdisc.NewDiscoveryRegister(&config.Discovery, &config.Share)
if err != nil {
return errs.WrapMsg(err, "failed to register discovery service")
}
client.AddOption(mw.GrpcClient(), grpc.WithTransportCredentials(insecure.NewCredentials()),
grpc.WithDefaultServiceConfig(fmt.Sprintf(`{"LoadBalancingPolicy": "%s"}`, "round_robin")))
var (
netDone = make(chan struct{}, 1)
@@ -59,73 +62,29 @@ func Start(ctx context.Context, index int, cfg *Config) error {
prometheusPort int
)
router, err := newGinRouter(ctx, client, cfg)
if err != nil {
return err
}
registerIP, err := network.GetRpcRegisterIP("")
if err != nil {
return err
}
getAutoPort := func() (net.Listener, int, error) {
registerAddr := net.JoinHostPort(registerIP, "0")
listener, err := net.Listen("tcp", registerAddr)
if err != nil {
return nil, 0, errs.WrapMsg(err, "listen err", "registerAddr", registerAddr)
}
_, portStr, _ := net.SplitHostPort(listener.Addr().String())
port, _ := strconv.Atoi(portStr)
return listener, port, nil
}
if cfg.API.Prometheus.AutoSetPorts && cfg.Discovery.Enable != config.ETCD {
return errs.New("only etcd support autoSetPorts", "RegisterName", "api").Wrap()
}
if cfg.API.Prometheus.Enable {
var (
listener net.Listener
)
if cfg.API.Prometheus.AutoSetPorts {
listener, prometheusPort, err = getAutoPort()
if err != nil {
return err
}
etcdClient := client.(*etcd.SvcDiscoveryRegistryImpl).GetClient()
_, err = etcdClient.Put(ctx, prommetrics.BuildDiscoveryKey(prommetrics.APIKeyName), jsonutil.StructToJsonString(prommetrics.BuildDefaultTarget(registerIP, prometheusPort)))
if err != nil {
return errs.WrapMsg(err, "etcd put err")
}
} else {
prometheusPort, err = datautil.GetElemByIndex(cfg.API.Prometheus.Ports, index)
if err != nil {
return err
}
listener, err = net.Listen("tcp", fmt.Sprintf(":%d", prometheusPort))
if err != nil {
return errs.WrapMsg(err, "listen err", "addr", fmt.Sprintf(":%d", prometheusPort))
}
}
router := newGinRouter(client, config)
if config.API.Prometheus.Enable {
go func() {
if err := prommetrics.ApiInit(listener); err != nil && !errors.Is(err, http.ErrServerClosed) {
prometheusPort, err = datautil.GetElemByIndex(config.API.Prometheus.Ports, index)
if err != nil {
netErr = err
netDone <- struct{}{}
return
}
if err := prommetrics.ApiInit(prometheusPort); err != nil && err != http.ErrServerClosed {
netErr = errs.WrapMsg(err, fmt.Sprintf("api prometheus start err: %d", prometheusPort))
netDone <- struct{}{}
}
}()
}
address := net.JoinHostPort(network.GetListenIP(cfg.API.Api.ListenIP), strconv.Itoa(apiPort))
address := net.JoinHostPort(network.GetListenIP(config.API.Api.ListenIP), strconv.Itoa(apiPort))
server := http.Server{Addr: address, Handler: router}
log.CInfo(ctx, "API server is initializing", "address", address, "apiPort", apiPort, "prometheusPort", prometheusPort)
go func() {
err = server.ListenAndServe()
if err != nil && !errors.Is(err, http.ErrServerClosed) {
if err != nil && err != http.ErrServerClosed {
netErr = errs.WrapMsg(err, fmt.Sprintf("api start err: %s", server.Addr))
netDone <- struct{}{}
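The two sides of this file also disagree on how to detect a clean shutdown: direct comparison with http.ErrServerClosed versus errors.Is. The difference only shows once errors are wrapped, as this standalone check demonstrates:

package main

import (
	"errors"
	"fmt"
	"net/http"
)

func main() {
	err := fmt.Errorf("api server stopped: %w", http.ErrServerClosed)
	fmt.Println(err == http.ErrServerClosed)          // false: wrapping produces a new error value
	fmt.Println(errors.Is(err, http.ErrServerClosed)) // true: errors.Is walks the unwrap chain
}

ListenAndServe currently returns the sentinel unwrapped, so both forms behave the same today; errors.Is is simply robust if the error ever arrives wrapped.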
+49 -36
@@ -2,17 +2,17 @@ package jssdk
import (
"context"
"github.com/openimsdk/open-im-server/v3/pkg/rpcli"
"sort"
"github.com/gin-gonic/gin"
"github.com/openimsdk/protocol/conversation"
"github.com/openimsdk/protocol/group"
"github.com/openimsdk/protocol/jssdk"
"github.com/openimsdk/protocol/msg"
"github.com/openimsdk/protocol/relation"
"github.com/openimsdk/protocol/sdkws"
"github.com/openimsdk/protocol/user"
"github.com/openimsdk/tools/mcontext"
"github.com/openimsdk/tools/utils/datautil"
"sort"
)
const (
@@ -20,23 +20,22 @@ const (
defaultGetActiveConversation = 100
)
func NewJSSdkApi(userClient *rpcli.UserClient, relationClient *rpcli.RelationClient, groupClient *rpcli.GroupClient,
conversationClient *rpcli.ConversationClient, msgClient *rpcli.MsgClient) *JSSdk {
func NewJSSdkApi(user user.UserClient, friend relation.FriendClient, group group.GroupClient, msg msg.MsgClient, conv conversation.ConversationClient) *JSSdk {
return &JSSdk{
userClient: userClient,
relationClient: relationClient,
groupClient: groupClient,
conversationClient: conversationClient,
msgClient: msgClient,
user: user,
friend: friend,
group: group,
msg: msg,
conv: conv,
}
}
type JSSdk struct {
userClient *rpcli.UserClient
relationClient *rpcli.RelationClient
groupClient *rpcli.GroupClient
conversationClient *rpcli.ConversationClient
msgClient *rpcli.MsgClient
user user.UserClient
friend relation.FriendClient
group group.GroupClient
msg msg.MsgClient
conv conversation.ConversationClient
}
func (x *JSSdk) GetActiveConversations(c *gin.Context) {
@@ -68,11 +67,11 @@ func (x *JSSdk) fillConversations(ctx context.Context, conversations []*jssdk.Co
groupMap map[string]*sdkws.GroupInfo
)
if len(userIDs) > 0 {
users, err := x.userClient.GetUsersInfo(ctx, userIDs)
users, err := field(ctx, x.user.GetDesignateUsers, &user.GetDesignateUsersReq{UserIDs: userIDs}, (*user.GetDesignateUsersResp).GetUsersInfo)
if err != nil {
return err
}
friends, err := x.relationClient.GetFriendsInfo(ctx, conversations[0].Conversation.OwnerUserID, userIDs)
friends, err := field(ctx, x.friend.GetFriendInfo, &relation.GetFriendInfoReq{OwnerUserID: conversations[0].Conversation.OwnerUserID, FriendUserIDs: userIDs}, (*relation.GetFriendInfoResp).GetFriendInfos)
if err != nil {
return err
}
@@ -80,11 +79,11 @@ func (x *JSSdk) fillConversations(ctx context.Context, conversations []*jssdk.Co
friendMap = datautil.SliceToMap(friends, (*relation.FriendInfoOnly).GetFriendUserID)
}
if len(groupIDs) > 0 {
groups, err := x.groupClient.GetGroupsInfo(ctx, groupIDs)
resp, err := x.group.GetGroupsInfo(ctx, &group.GetGroupsInfoReq{GroupIDs: groupIDs})
if err != nil {
return err
}
groupMap = datautil.SliceToMap(groups, (*sdkws.GroupInfo).GetGroupID)
groupMap = datautil.SliceToMap(resp.GroupInfos, (*sdkws.GroupInfo).GetGroupID)
}
for _, c := range conversations {
if c.Conversation.GroupID == "" {
@@ -102,18 +101,21 @@ func (x *JSSdk) getActiveConversations(ctx context.Context, req *jssdk.GetActive
req.Count = defaultGetActiveConversation
}
req.OwnerUserID = mcontext.GetOpUserID(ctx)
conversationIDs, err := x.conversationClient.GetConversationIDs(ctx, req.OwnerUserID)
conversationIDs, err := field(ctx, x.conv.GetConversationIDs,
&conversation.GetConversationIDsReq{UserID: req.OwnerUserID}, (*conversation.GetConversationIDsResp).GetConversationIDs)
if err != nil {
return nil, err
}
if len(conversationIDs) == 0 {
return &jssdk.GetActiveConversationsResp{}, nil
}
readSeq, err := x.msgClient.GetHasReadSeqs(ctx, conversationIDs, req.OwnerUserID)
readSeq, err := field(ctx, x.msg.GetHasReadSeqs,
&msg.GetHasReadSeqsReq{UserID: req.OwnerUserID, ConversationIDs: conversationIDs}, (*msg.SeqsInfoResp).GetMaxSeqs)
if err != nil {
return nil, err
}
activeConversation, err := x.msgClient.GetActiveConversation(ctx, conversationIDs)
activeConversation, err := field(ctx, x.msg.GetActiveConversation,
&msg.GetActiveConversationReq{ConversationIDs: conversationIDs}, (*msg.GetActiveConversationResp).GetConversations)
if err != nil {
return nil, err
}
@@ -124,7 +126,8 @@ func (x *JSSdk) getActiveConversations(ctx context.Context, req *jssdk.GetActive
Conversation: activeConversation,
}
if len(activeConversation) > 1 {
pinnedConversationIDs, err := x.conversationClient.GetPinnedConversationIDs(ctx, req.OwnerUserID)
pinnedConversationIDs, err := field(ctx, x.conv.GetPinnedConversationIDs,
&conversation.GetPinnedConversationIDsReq{UserID: req.OwnerUserID}, (*conversation.GetPinnedConversationIDsResp).GetConversationIDs)
if err != nil {
return nil, err
}
@@ -132,18 +135,25 @@ func (x *JSSdk) getActiveConversations(ctx context.Context, req *jssdk.GetActive
}
sort.Sort(&sortConversations)
sortList := sortConversations.Top(int(req.Count))
conversations, err := x.conversationClient.GetConversations(ctx, datautil.Slice(sortList, func(c *msg.ActiveConversation) string {
return c.ConversationID
}), req.OwnerUserID)
conversations, err := field(ctx, x.conv.GetConversations,
&conversation.GetConversationsReq{
OwnerUserID: req.OwnerUserID,
ConversationIDs: datautil.Slice(sortList, func(c *msg.ActiveConversation) string {
return c.ConversationID
})}, (*conversation.GetConversationsResp).GetConversations)
if err != nil {
return nil, err
}
msgs, err := x.msgClient.GetSeqMessage(ctx, req.OwnerUserID, datautil.Slice(sortList, func(c *msg.ActiveConversation) *msg.ConversationSeqs {
return &msg.ConversationSeqs{
ConversationID: c.ConversationID,
Seqs: []int64{c.MaxSeq},
}
}))
msgs, err := field(ctx, x.msg.GetSeqMessage,
&msg.GetSeqMessageReq{
UserID: req.OwnerUserID,
Conversations: datautil.Slice(sortList, func(c *msg.ActiveConversation) *msg.ConversationSeqs {
return &msg.ConversationSeqs{
ConversationID: c.ConversationID,
Seqs: []int64{c.MaxSeq},
}
}),
}, (*msg.GetSeqMessageResp).GetMsgs)
if err != nil {
return nil, err
}
@@ -185,7 +195,7 @@ func (x *JSSdk) getActiveConversations(ctx context.Context, req *jssdk.GetActive
func (x *JSSdk) getConversations(ctx context.Context, req *jssdk.GetConversationsReq) (*jssdk.GetConversationsResp, error) {
req.OwnerUserID = mcontext.GetOpUserID(ctx)
conversations, err := x.conversationClient.GetConversations(ctx, req.ConversationIDs, req.OwnerUserID)
conversations, err := field(ctx, x.conv.GetConversations, &conversation.GetConversationsReq{OwnerUserID: req.OwnerUserID, ConversationIDs: req.ConversationIDs}, (*conversation.GetConversationsResp).GetConversations)
if err != nil {
return nil, err
}
@@ -195,11 +205,13 @@ func (x *JSSdk) getConversations(ctx context.Context, req *jssdk.GetConversation
req.ConversationIDs = datautil.Slice(conversations, func(c *conversation.Conversation) string {
return c.ConversationID
})
maxSeqs, err := x.msgClient.GetMaxSeqs(ctx, req.ConversationIDs)
maxSeqs, err := field(ctx, x.msg.GetMaxSeqs,
&msg.GetMaxSeqsReq{ConversationIDs: req.ConversationIDs}, (*msg.SeqsInfoResp).GetMaxSeqs)
if err != nil {
return nil, err
}
readSeqs, err := x.msgClient.GetHasReadSeqs(ctx, req.ConversationIDs, req.OwnerUserID)
readSeqs, err := field(ctx, x.msg.GetHasReadSeqs,
&msg.GetHasReadSeqsReq{UserID: req.OwnerUserID, ConversationIDs: req.ConversationIDs}, (*msg.SeqsInfoResp).GetMaxSeqs)
if err != nil {
return nil, err
}
@@ -214,7 +226,8 @@ func (x *JSSdk) getConversations(ctx context.Context, req *jssdk.GetConversation
}
var msgs map[string]*sdkws.PullMsgs
if len(conversationSeqs) > 0 {
msgs, err = x.msgClient.GetSeqMessage(ctx, req.OwnerUserID, conversationSeqs)
msgs, err = field(ctx, x.msg.GetSeqMessage,
&msg.GetSeqMessageReq{UserID: req.OwnerUserID, Conversations: conversationSeqs}, (*msg.GetSeqMessageResp).GetMsgs)
if err != nil {
return nil, err
}
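One side of this file threads every RPC through a generic field helper, pairing a bound client method with a response-field method expression such as (*conversation.GetConversationIDsResp).GetConversationIDs. The helper's definition is outside this diff; a plausible reconstruction, on the assumption that it only projects one field out of the response, is:

package jssdksketch // plausible reconstruction; the real helper may differ

import (
	"context"

	"google.golang.org/grpc"
)

// field calls an RPC and projects a single field off its response,
// using a method expression such as (*Resp).GetFoo as the projector.
func field[Req, Resp, F any](
	ctx context.Context,
	fn func(context.Context, *Req, ...grpc.CallOption) (*Resp, error),
	req *Req,
	get func(*Resp) F,
) (F, error) {
	resp, err := fn(ctx, req)
	if err != nil {
		var zero F
		return zero, err
	}
	return get(resp), nil
}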
+34 -30
@@ -21,7 +21,7 @@ import (
"github.com/openimsdk/open-im-server/v3/pkg/apistruct"
"github.com/openimsdk/open-im-server/v3/pkg/authverify"
"github.com/openimsdk/open-im-server/v3/pkg/common/config"
"github.com/openimsdk/open-im-server/v3/pkg/rpcli"
"github.com/openimsdk/open-im-server/v3/pkg/rpcclient"
"github.com/openimsdk/protocol/constant"
"github.com/openimsdk/protocol/msg"
"github.com/openimsdk/protocol/sdkws"
@@ -37,14 +37,16 @@ import (
)
type MessageApi struct {
Client msg.MsgClient
userClient *rpcli.UserClient
imAdminUserID []string
*rpcclient.Message
validate *validator.Validate
userRpcClient *rpcclient.UserRpcClient
imAdminUserID []string
}
func NewMessageApi(client msg.MsgClient, userClient *rpcli.UserClient, imAdminUserID []string) MessageApi {
return MessageApi{Client: client, userClient: userClient, imAdminUserID: imAdminUserID, validate: validator.New()}
func NewMessageApi(msgRpcClient *rpcclient.Message, userRpcClient *rpcclient.User,
imAdminUserID []string) MessageApi {
return MessageApi{Message: msgRpcClient, validate: validator.New(),
userRpcClient: rpcclient.NewUserRpcClientByUser(userRpcClient), imAdminUserID: imAdminUserID}
}
func (*MessageApi) SetOptions(options map[string]bool, value bool) {
@@ -106,51 +108,51 @@ func (m *MessageApi) newUserSendMsgReq(_ *gin.Context, params *apistruct.SendMsg
}
func (m *MessageApi) GetSeq(c *gin.Context) {
a2r.Call(c, msg.MsgClient.GetMaxSeq, m.Client)
a2r.Call(msg.MsgClient.GetMaxSeq, m.Client, c)
}
func (m *MessageApi) PullMsgBySeqs(c *gin.Context) {
a2r.Call(c, msg.MsgClient.PullMessageBySeqs, m.Client)
a2r.Call(msg.MsgClient.PullMessageBySeqs, m.Client, c)
}
func (m *MessageApi) RevokeMsg(c *gin.Context) {
a2r.Call(c, msg.MsgClient.RevokeMsg, m.Client)
a2r.Call(msg.MsgClient.RevokeMsg, m.Client, c)
}
func (m *MessageApi) MarkMsgsAsRead(c *gin.Context) {
a2r.Call(c, msg.MsgClient.MarkMsgsAsRead, m.Client)
a2r.Call(msg.MsgClient.MarkMsgsAsRead, m.Client, c)
}
func (m *MessageApi) MarkConversationAsRead(c *gin.Context) {
a2r.Call(c, msg.MsgClient.MarkConversationAsRead, m.Client)
a2r.Call(msg.MsgClient.MarkConversationAsRead, m.Client, c)
}
func (m *MessageApi) GetConversationsHasReadAndMaxSeq(c *gin.Context) {
a2r.Call(c, msg.MsgClient.GetConversationsHasReadAndMaxSeq, m.Client)
a2r.Call(msg.MsgClient.GetConversationsHasReadAndMaxSeq, m.Client, c)
}
func (m *MessageApi) SetConversationHasReadSeq(c *gin.Context) {
a2r.Call(c, msg.MsgClient.SetConversationHasReadSeq, m.Client)
a2r.Call(msg.MsgClient.SetConversationHasReadSeq, m.Client, c)
}
func (m *MessageApi) ClearConversationsMsg(c *gin.Context) {
a2r.Call(c, msg.MsgClient.ClearConversationsMsg, m.Client)
a2r.Call(msg.MsgClient.ClearConversationsMsg, m.Client, c)
}
func (m *MessageApi) UserClearAllMsg(c *gin.Context) {
a2r.Call(c, msg.MsgClient.UserClearAllMsg, m.Client)
a2r.Call(msg.MsgClient.UserClearAllMsg, m.Client, c)
}
func (m *MessageApi) DeleteMsgs(c *gin.Context) {
a2r.Call(c, msg.MsgClient.DeleteMsgs, m.Client)
a2r.Call(msg.MsgClient.DeleteMsgs, m.Client, c)
}
func (m *MessageApi) DeleteMsgPhysicalBySeq(c *gin.Context) {
a2r.Call(c, msg.MsgClient.DeleteMsgPhysicalBySeq, m.Client)
a2r.Call(msg.MsgClient.DeleteMsgPhysicalBySeq, m.Client, c)
}
func (m *MessageApi) DeleteMsgPhysical(c *gin.Context) {
a2r.Call(c, msg.MsgClient.DeleteMsgPhysical, m.Client)
a2r.Call(msg.MsgClient.DeleteMsgPhysical, m.Client, c)
}
func (m *MessageApi) getSendMsgReq(c *gin.Context, req apistruct.SendMsg) (sendMsgReq *msg.SendMsgReq, err error) {
@@ -171,10 +173,12 @@ func (m *MessageApi) getSendMsgReq(c *gin.Context, req apistruct.SendMsg) (sendM
data = apistruct.AtElem{}
case constant.Custom:
data = apistruct.CustomElem{}
case constant.Stream:
data = apistruct.StreamMsgElem{}
case constant.OANotification:
data = apistruct.OANotificationElem{}
req.SessionType = constant.NotificationChatType
if err = m.userClient.GetNotificationByID(c, req.SendID); err != nil {
if err = m.userRpcClient.GetNotificationByID(c, req.SendID); err != nil {
return nil, err
}
default:
@@ -308,10 +312,10 @@ func (m *MessageApi) BatchSendMsg(c *gin.Context) {
var recvIDs []string
if req.IsSendAll {
var pageNumber int32 = 1
const showNumber = 500
pageNumber := 1
showNumber := 500
for {
recvIDsPart, err := m.userClient.GetAllUserIDs(c, pageNumber, showNumber)
recvIDsPart, err := m.userRpcClient.GetAllUserIDs(c, int32(pageNumber), int32(showNumber))
if err != nil {
apiresp.GinError(c, err)
return
@@ -349,33 +353,33 @@ func (m *MessageApi) BatchSendMsg(c *gin.Context) {
}
func (m *MessageApi) CheckMsgIsSendSuccess(c *gin.Context) {
a2r.Call(c, msg.MsgClient.GetSendMsgStatus, m.Client)
a2r.Call(msg.MsgClient.GetSendMsgStatus, m.Client, c)
}
func (m *MessageApi) GetUsersOnlineStatus(c *gin.Context) {
a2r.Call(c, msg.MsgClient.GetSendMsgStatus, m.Client)
a2r.Call(msg.MsgClient.GetSendMsgStatus, m.Client, c)
}
func (m *MessageApi) GetActiveUser(c *gin.Context) {
a2r.Call(c, msg.MsgClient.GetActiveUser, m.Client)
a2r.Call(msg.MsgClient.GetActiveUser, m.Client, c)
}
func (m *MessageApi) GetActiveGroup(c *gin.Context) {
a2r.Call(c, msg.MsgClient.GetActiveGroup, m.Client)
a2r.Call(msg.MsgClient.GetActiveGroup, m.Client, c)
}
func (m *MessageApi) SearchMsg(c *gin.Context) {
a2r.Call(c, msg.MsgClient.SearchMessage, m.Client)
a2r.Call(msg.MsgClient.SearchMessage, m.Client, c)
}
func (m *MessageApi) GetServerTime(c *gin.Context) {
a2r.Call(c, msg.MsgClient.GetServerTime, m.Client)
a2r.Call(msg.MsgClient.GetServerTime, m.Client, c)
}
func (m *MessageApi) GetStreamMsg(c *gin.Context) {
a2r.Call(c, msg.MsgClient.GetServerTime, m.Client)
a2r.Call(msg.MsgClient.GetStreamMsg, m.Client, c)
}
func (m *MessageApi) AppendStreamMsg(c *gin.Context) {
a2r.Call(c, msg.MsgClient.GetServerTime, m.Client)
a2r.Call(msg.MsgClient.AppendStreamMsg, m.Client, c)
}
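In the IsSendAll branch above, the hunk starts a page loop over GetAllUserIDs but the loop's exit condition falls outside the excerpt. A common shape for such a loop, sketched with a hypothetical fetch function rather than the repo's exact client, stops on the first short page:

package msgsketch // hypothetical sketch; fetch stands in for the user RPC

import "context"

const showNumber = 500 // page size used by the hunk above

// collectAllUserIDs pages through the user listing and stops when a
// page comes back shorter than the page size. The termination rule is
// an assumption, since the loop body is cut off in the diff excerpt.
func collectAllUserIDs(
	ctx context.Context,
	fetch func(ctx context.Context, pageNumber, showNumber int32) ([]string, error),
) ([]string, error) {
	var all []string
	for page := int32(1); ; page++ {
		ids, err := fetch(ctx, page, showNumber)
		if err != nil {
			return nil, err
		}
		all = append(all, ids...)
		if len(ids) < showNumber {
			return all, nil // short page: no more users
		}
	}
}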
-114
@@ -1,114 +0,0 @@
package api
import (
"encoding/json"
"net/http"
"github.com/gin-gonic/gin"
"github.com/openimsdk/open-im-server/v3/pkg/common/config"
"github.com/openimsdk/open-im-server/v3/pkg/common/prommetrics"
"github.com/openimsdk/tools/apiresp"
"github.com/openimsdk/tools/discovery"
"github.com/openimsdk/tools/discovery/etcd"
"github.com/openimsdk/tools/errs"
"github.com/openimsdk/tools/log"
clientv3 "go.etcd.io/etcd/client/v3"
)
type PrometheusDiscoveryApi struct {
config *Config
client *clientv3.Client
}
func NewPrometheusDiscoveryApi(cfg *Config, client discovery.SvcDiscoveryRegistry) *PrometheusDiscoveryApi {
api := &PrometheusDiscoveryApi{
config: cfg,
}
if cfg.Discovery.Enable == config.ETCD {
api.client = client.(*etcd.SvcDiscoveryRegistryImpl).GetClient()
}
return api
}
func (p *PrometheusDiscoveryApi) Enable(c *gin.Context) {
if p.config.Discovery.Enable != config.ETCD {
c.JSON(http.StatusOK, []struct{}{})
c.Abort()
}
}
func (p *PrometheusDiscoveryApi) discovery(c *gin.Context, key string) {
eResp, err := p.client.Get(c, prommetrics.BuildDiscoveryKey(key))
if err != nil {
// Respond with an error if the etcd lookup fails.
apiresp.GinError(c, errs.WrapMsg(err, "etcd get err"))
return
}
if len(eResp.Kvs) == 0 {
c.JSON(http.StatusOK, []*prommetrics.Target{})
}
var (
resp = &prommetrics.RespTarget{
Targets: make([]string, 0, len(eResp.Kvs)),
}
)
for i := range eResp.Kvs {
var target prommetrics.Target
err = json.Unmarshal(eResp.Kvs[i].Value, &target)
if err != nil {
log.ZError(c, "prometheus unmarshal err", errs.Wrap(err))
}
resp.Targets = append(resp.Targets, target.Target)
if resp.Labels == nil {
resp.Labels = target.Labels
}
}
c.JSON(200, []*prommetrics.RespTarget{resp})
}
func (p *PrometheusDiscoveryApi) Api(c *gin.Context) {
p.discovery(c, prommetrics.APIKeyName)
}
func (p *PrometheusDiscoveryApi) User(c *gin.Context) {
p.discovery(c, p.config.Share.RpcRegisterName.User)
}
func (p *PrometheusDiscoveryApi) Group(c *gin.Context) {
p.discovery(c, p.config.Share.RpcRegisterName.Group)
}
func (p *PrometheusDiscoveryApi) Msg(c *gin.Context) {
p.discovery(c, p.config.Share.RpcRegisterName.Msg)
}
func (p *PrometheusDiscoveryApi) Friend(c *gin.Context) {
p.discovery(c, p.config.Share.RpcRegisterName.Friend)
}
func (p *PrometheusDiscoveryApi) Conversation(c *gin.Context) {
p.discovery(c, p.config.Share.RpcRegisterName.Conversation)
}
func (p *PrometheusDiscoveryApi) Third(c *gin.Context) {
p.discovery(c, p.config.Share.RpcRegisterName.Third)
}
func (p *PrometheusDiscoveryApi) Auth(c *gin.Context) {
p.discovery(c, p.config.Share.RpcRegisterName.Auth)
}
func (p *PrometheusDiscoveryApi) Push(c *gin.Context) {
p.discovery(c, p.config.Share.RpcRegisterName.Push)
}
func (p *PrometheusDiscoveryApi) MessageGateway(c *gin.Context) {
p.discovery(c, p.config.Share.RpcRegisterName.MessageGateway)
}
func (p *PrometheusDiscoveryApi) MessageTransfer(c *gin.Context) {
p.discovery(c, prommetrics.MessageTransferKeyName)
}
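The RespTarget payload assembled in this removed file lines up with Prometheus's HTTP service discovery contract: a JSON array of groups, each carrying a targets list and optional shared labels. A standalone illustration of the wire shape (generic type names, not the repo's prommetrics structs):

package main

import (
	"encoding/json"
	"fmt"
)

type targetGroup struct {
	Targets []string          `json:"targets"`
	Labels  map[string]string `json:"labels,omitempty"`
}

func main() {
	b, _ := json.Marshal([]targetGroup{{
		Targets: []string{"10.0.0.5:12002"},
		Labels:  map[string]string{"namespace": "openim"},
	}})
	fmt.Println(string(b))
	// [{"targets":["10.0.0.5:12002"],"labels":{"namespace":"openim"}}]
}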
+45 -83
@@ -1,18 +1,7 @@
package api
import (
"context"
"net/http"
"strings"
"github.com/openimsdk/open-im-server/v3/pkg/rpcli"
pbAuth "github.com/openimsdk/protocol/auth"
"github.com/openimsdk/protocol/conversation"
"github.com/openimsdk/protocol/group"
"github.com/openimsdk/protocol/msg"
"github.com/openimsdk/protocol/relation"
"github.com/openimsdk/protocol/third"
"github.com/openimsdk/protocol/user"
"fmt"
"github.com/openimsdk/open-im-server/v3/internal/api/jssdk"
@@ -21,8 +10,15 @@ import (
"github.com/gin-gonic/gin"
"github.com/gin-gonic/gin/binding"
"github.com/go-playground/validator/v10"
"google.golang.org/grpc"
"google.golang.org/grpc/credentials/insecure"
"net/http"
"strings"
"github.com/openimsdk/open-im-server/v3/pkg/common/prommetrics"
"github.com/openimsdk/open-im-server/v3/pkg/common/servererrs"
"github.com/openimsdk/open-im-server/v3/pkg/rpcclient"
"github.com/openimsdk/protocol/constant"
"github.com/openimsdk/tools/apiresp"
"github.com/openimsdk/tools/discovery"
@@ -52,40 +48,23 @@ func prommetricsGin() gin.HandlerFunc {
}
}
func newGinRouter(ctx context.Context, client discovery.SvcDiscoveryRegistry, config *Config) (*gin.Engine, error) {
authConn, err := client.GetConn(ctx, config.Share.RpcRegisterName.Auth)
if err != nil {
return nil, err
}
userConn, err := client.GetConn(ctx, config.Share.RpcRegisterName.User)
if err != nil {
return nil, err
}
groupConn, err := client.GetConn(ctx, config.Share.RpcRegisterName.Group)
if err != nil {
return nil, err
}
friendConn, err := client.GetConn(ctx, config.Share.RpcRegisterName.Friend)
if err != nil {
return nil, err
}
conversationConn, err := client.GetConn(ctx, config.Share.RpcRegisterName.Conversation)
if err != nil {
return nil, err
}
thirdConn, err := client.GetConn(ctx, config.Share.RpcRegisterName.Third)
if err != nil {
return nil, err
}
msgConn, err := client.GetConn(ctx, config.Share.RpcRegisterName.Msg)
if err != nil {
return nil, err
}
func newGinRouter(disCov discovery.SvcDiscoveryRegistry, config *Config) *gin.Engine {
disCov.AddOption(mw.GrpcClient(), grpc.WithTransportCredentials(insecure.NewCredentials()),
grpc.WithDefaultServiceConfig(fmt.Sprintf(`{"LoadBalancingPolicy": "%s"}`, "round_robin")))
gin.SetMode(gin.ReleaseMode)
r := gin.New()
if v, ok := binding.Validator.Engine().(*validator.Validate); ok {
_ = v.RegisterValidation("required_if", RequiredIf)
}
// init rpc client here
userRpc := rpcclient.NewUser(disCov, config.Share.RpcRegisterName.User, config.Share.RpcRegisterName.MessageGateway,
config.Share.IMAdminUserID)
groupRpc := rpcclient.NewGroup(disCov, config.Share.RpcRegisterName.Group)
friendRpc := rpcclient.NewFriend(disCov, config.Share.RpcRegisterName.Friend)
messageRpc := rpcclient.NewMessage(disCov, config.Share.RpcRegisterName.Msg)
conversationRpc := rpcclient.NewConversation(disCov, config.Share.RpcRegisterName.Conversation)
authRpc := rpcclient.NewAuth(disCov, config.Share.RpcRegisterName.Auth)
thirdRpc := rpcclient.NewThird(disCov, config.Share.RpcRegisterName.Third, config.API.Prometheus.GrafanaURL)
switch config.API.Api.CompressionLevel {
case NoCompression:
case DefaultCompression:
@@ -95,9 +74,10 @@ func newGinRouter(ctx context.Context, client discovery.SvcDiscoveryRegistry, co
case BestSpeed:
r.Use(gzip.Gzip(gzip.BestSpeed))
}
r.Use(prommetricsGin(), gin.RecoveryWithWriter(gin.DefaultErrorWriter, mw.GinPanicErr), mw.CorsHandler(), mw.GinParseOperationID(), GinParseToken(rpcli.NewAuthClient(authConn)))
u := NewUserApi(user.NewUserClient(userConn), client, config.Share.RpcRegisterName)
m := NewMessageApi(msg.NewMsgClient(msgConn), rpcli.NewUserClient(userConn), config.Share.IMAdminUserID)
r.Use(prommetricsGin(), gin.RecoveryWithWriter(gin.DefaultErrorWriter, mw.GinPanicErr), mw.CorsHandler(), mw.GinParseOperationID(), GinParseToken(authRpc))
u := NewUserApi(*userRpc)
m := NewMessageApi(messageRpc, userRpc, config.Share.IMAdminUserID)
j := jssdk.NewJSSdkApi(userRpc.Client, friendRpc.Client, groupRpc.Client, messageRpc.Client, conversationRpc.Client)
userRouterGroup := r.Group("/user")
{
userRouterGroup.POST("/user_register", u.UserRegister)
@@ -125,9 +105,9 @@ func newGinRouter(ctx context.Context, client discovery.SvcDiscoveryRegistry, co
userRouterGroup.POST("/search_notification_account", u.SearchNotificationAccount)
}
// friend routing group
friendRouterGroup := r.Group("/friend")
{
f := NewFriendApi(relation.NewFriendClient(friendConn))
friendRouterGroup := r.Group("/friend")
f := NewFriendApi(*friendRpc)
friendRouterGroup.POST("/delete_friend", f.DeleteFriend)
friendRouterGroup.POST("/get_friend_apply_list", f.GetFriendApplyList)
friendRouterGroup.POST("/get_designated_friend_apply", f.GetDesignatedFriendsApply)
@@ -150,10 +130,9 @@ func newGinRouter(ctx context.Context, client discovery.SvcDiscoveryRegistry, co
friendRouterGroup.POST("/get_incremental_friends", f.GetIncrementalFriends)
friendRouterGroup.POST("/get_full_friend_user_ids", f.GetFullFriendUserIDs)
}
g := NewGroupApi(group.NewGroupClient(groupConn))
g := NewGroupApi(*groupRpc)
groupRouterGroup := r.Group("/group")
{
groupRouterGroup := r.Group("/group")
groupRouterGroup.POST("/create_group", g.CreateGroup)
groupRouterGroup.POST("/set_group_info", g.SetGroupInfo)
groupRouterGroup.POST("/set_group_info_ex", g.SetGroupInfoEx)
@@ -187,19 +166,18 @@ func newGinRouter(ctx context.Context, client discovery.SvcDiscoveryRegistry, co
groupRouterGroup.POST("/get_full_join_group_ids", g.GetFullJoinGroupIDs)
}
// auth routing group
authRouterGroup := r.Group("/auth")
{
a := NewAuthApi(pbAuth.NewAuthClient(authConn))
authRouterGroup := r.Group("/auth")
a := NewAuthApi(*authRpc)
authRouterGroup.POST("/get_admin_token", a.GetAdminToken)
authRouterGroup.POST("/get_user_token", a.GetUserToken)
authRouterGroup.POST("/parse_token", a.ParseToken)
authRouterGroup.POST("/force_logout", a.ForceLogout)
}
// Third service
thirdGroup := r.Group("/third")
{
t := NewThirdApi(third.NewThirdClient(thirdConn), config.API.Prometheus.GrafanaURL)
thirdGroup := r.Group("/third")
t := NewThirdApi(*thirdRpc)
thirdGroup.GET("/prometheus", t.GetPrometheus)
thirdGroup.POST("/fcm_update_token", t.FcmUpdateToken)
thirdGroup.POST("/set_app_badge", t.SetAppBadge)
@@ -222,8 +200,8 @@ func newGinRouter(ctx context.Context, client discovery.SvcDiscoveryRegistry, co
objectGroup.GET("/*name", t.ObjectRedirect)
}
// Message
msgGroup := r.Group("/msg")
{
msgGroup := r.Group("/msg")
msgGroup.POST("/newest_seq", m.GetSeq)
msgGroup.POST("/search_msg", m.SearchMsg)
msgGroup.POST("/send_msg", m.SendMessage)
@@ -244,11 +222,13 @@ func newGinRouter(ctx context.Context, client discovery.SvcDiscoveryRegistry, co
msgGroup.POST("/batch_send_msg", m.BatchSendMsg)
msgGroup.POST("/check_msg_is_send_success", m.CheckMsgIsSendSuccess)
msgGroup.POST("/get_server_time", m.GetServerTime)
msgGroup.POST("/get_stream_msg", m.GetStreamMsg)
msgGroup.POST("/append_stream_msg", m.AppendStreamMsg)
}
// Conversation
conversationGroup := r.Group("/conversation")
{
c := NewConversationApi(conversation.NewConversationClient(conversationConn))
conversationGroup := r.Group("/conversation")
c := NewConversationApi(*conversationRpc)
conversationGroup.POST("/get_sorted_conversation_list", c.GetSortedConversationList)
conversationGroup.POST("/get_all_conversations", c.GetAllConversations)
conversationGroup.POST("/get_conversation", c.GetConversation)
@@ -262,40 +242,22 @@ func newGinRouter(ctx context.Context, client discovery.SvcDiscoveryRegistry, co
conversationGroup.POST("/get_pinned_conversation_ids", c.GetPinnedConversationIDs)
}
statisticsGroup := r.Group("/statistics")
{
statisticsGroup := r.Group("/statistics")
statisticsGroup.POST("/user/register", u.UserRegisterCount)
statisticsGroup.POST("/user/active", m.GetActiveUser)
statisticsGroup.POST("/group/create", g.GroupCreateCount)
statisticsGroup.POST("/group/active", m.GetActiveGroup)
}
{
j := jssdk.NewJSSdkApi(rpcli.NewUserClient(userConn), rpcli.NewRelationClient(friendConn),
rpcli.NewGroupClient(groupConn), rpcli.NewConversationClient(conversationConn), rpcli.NewMsgClient(msgConn))
jssdk := r.Group("/jssdk")
jssdk.POST("/get_conversations", j.GetConversations)
jssdk.POST("/get_active_conversations", j.GetActiveConversations)
}
{
pd := NewPrometheusDiscoveryApi(config, client)
proDiscoveryGroup := r.Group("/prometheus_discovery", pd.Enable)
proDiscoveryGroup.GET("/api", pd.Api)
proDiscoveryGroup.GET("/user", pd.User)
proDiscoveryGroup.GET("/group", pd.Group)
proDiscoveryGroup.GET("/msg", pd.Msg)
proDiscoveryGroup.GET("/friend", pd.Friend)
proDiscoveryGroup.GET("/conversation", pd.Conversation)
proDiscoveryGroup.GET("/third", pd.Third)
proDiscoveryGroup.GET("/auth", pd.Auth)
proDiscoveryGroup.GET("/push", pd.Push)
proDiscoveryGroup.GET("/msg_gateway", pd.MessageGateway)
proDiscoveryGroup.GET("/msg_transfer", pd.MessageTransfer)
}
return r, nil
jssdk := r.Group("/jssdk")
jssdk.POST("/get_conversations", j.GetConversations)
jssdk.POST("/get_active_conversations", j.GetActiveConversations)
return r
}
func GinParseToken(authClient *rpcli.AuthClient) gin.HandlerFunc {
func GinParseToken(authRPC *rpcclient.Auth) gin.HandlerFunc {
return func(c *gin.Context) {
switch c.Request.Method {
case http.MethodPost:
@@ -313,7 +275,7 @@ func GinParseToken(authClient *rpcli.AuthClient) gin.HandlerFunc {
c.Abort()
return
}
resp, err := authClient.ParseToken(c, token)
resp, err := authRPC.ParseToken(c, token)
if err != nil {
apiresp.GinError(c, err)
c.Abort()
+32
@@ -0,0 +1,32 @@
// Copyright © 2023 OpenIM. All rights reserved.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package api
import (
"github.com/gin-gonic/gin"
"github.com/openimsdk/open-im-server/v3/pkg/rpcclient"
"github.com/openimsdk/protocol/user"
"github.com/openimsdk/tools/a2r"
)
type StatisticsApi rpcclient.User
func NewStatisticsApi(client rpcclient.User) StatisticsApi {
return StatisticsApi(client)
}
func (s *StatisticsApi) UserRegister(c *gin.Context) {
a2r.Call(user.UserClient.UserRegisterCount, s.Client, c)
}
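The type StatisticsApi rpcclient.User declaration above relies on Go's defined-type conversion: the new name shares the underlying struct of its source type, so NewStatisticsApi is a zero-cost conversion rather than a field-by-field copy, while still giving the API type its own method set. A standalone sketch of the pattern with illustrative names:

package main

import "fmt"

// User stands in for rpcclient.User.
type User struct{ ID string }

// StatisticsApi reuses User's underlying struct but carries its own methods.
type StatisticsApi User

func NewStatisticsApi(u User) StatisticsApi { return StatisticsApi(u) }

func (s *StatisticsApi) Describe() string { return "stats for " + s.ID }

func main() {
    s := NewStatisticsApi(User{ID: "u100"})
    fmt.Println(s.Describe()) // stats for u100
}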
+17 -19
@@ -24,27 +24,25 @@ import (
"strings"
"github.com/gin-gonic/gin"
"github.com/openimsdk/open-im-server/v3/pkg/rpcclient"
"github.com/openimsdk/protocol/third"
"github.com/openimsdk/tools/a2r"
"github.com/openimsdk/tools/errs"
"github.com/openimsdk/tools/mcontext"
)
type ThirdApi struct {
GrafanaUrl string
Client third.ThirdClient
}
type ThirdApi rpcclient.Third
func NewThirdApi(client third.ThirdClient, grafanaUrl string) ThirdApi {
return ThirdApi{Client: client, GrafanaUrl: grafanaUrl}
func NewThirdApi(client rpcclient.Third) ThirdApi {
return ThirdApi(client)
}
func (o *ThirdApi) FcmUpdateToken(c *gin.Context) {
a2r.Call(c, third.ThirdClient.FcmUpdateToken, o.Client)
a2r.Call(third.ThirdClient.FcmUpdateToken, o.Client, c)
}
func (o *ThirdApi) SetAppBadge(c *gin.Context) {
a2r.Call(c, third.ThirdClient.SetAppBadge, o.Client)
a2r.Call(third.ThirdClient.SetAppBadge, o.Client, c)
}
// #################### s3 ####################
@@ -79,44 +77,44 @@ func setURLPrefix(c *gin.Context, urlPrefix *string) error {
}
func (o *ThirdApi) PartLimit(c *gin.Context) {
a2r.Call(c, third.ThirdClient.PartLimit, o.Client)
a2r.Call(third.ThirdClient.PartLimit, o.Client, c)
}
func (o *ThirdApi) PartSize(c *gin.Context) {
a2r.Call(c, third.ThirdClient.PartSize, o.Client)
a2r.Call(third.ThirdClient.PartSize, o.Client, c)
}
func (o *ThirdApi) InitiateMultipartUpload(c *gin.Context) {
opt := setURLPrefixOption(third.ThirdClient.InitiateMultipartUpload, func(req *third.InitiateMultipartUploadReq) error {
return setURLPrefix(c, &req.UrlPrefix)
})
a2r.Call(c, third.ThirdClient.InitiateMultipartUpload, o.Client, opt)
a2r.Call(third.ThirdClient.InitiateMultipartUpload, o.Client, c, opt)
}
func (o *ThirdApi) AuthSign(c *gin.Context) {
a2r.Call(c, third.ThirdClient.AuthSign, o.Client)
a2r.Call(third.ThirdClient.AuthSign, o.Client, c)
}
func (o *ThirdApi) CompleteMultipartUpload(c *gin.Context) {
opt := setURLPrefixOption(third.ThirdClient.CompleteMultipartUpload, func(req *third.CompleteMultipartUploadReq) error {
return setURLPrefix(c, &req.UrlPrefix)
})
a2r.Call(c, third.ThirdClient.CompleteMultipartUpload, o.Client, opt)
a2r.Call(third.ThirdClient.CompleteMultipartUpload, o.Client, c, opt)
}
func (o *ThirdApi) AccessURL(c *gin.Context) {
a2r.Call(c, third.ThirdClient.AccessURL, o.Client)
a2r.Call(third.ThirdClient.AccessURL, o.Client, c)
}
func (o *ThirdApi) InitiateFormData(c *gin.Context) {
a2r.Call(c, third.ThirdClient.InitiateFormData, o.Client)
a2r.Call(third.ThirdClient.InitiateFormData, o.Client, c)
}
func (o *ThirdApi) CompleteFormData(c *gin.Context) {
opt := setURLPrefixOption(third.ThirdClient.CompleteFormData, func(req *third.CompleteFormDataReq) error {
return setURLPrefix(c, &req.UrlPrefix)
})
a2r.Call(c, third.ThirdClient.CompleteFormData, o.Client, opt)
a2r.Call(third.ThirdClient.CompleteFormData, o.Client, c, opt)
}
func (o *ThirdApi) ObjectRedirect(c *gin.Context) {
@@ -158,15 +156,15 @@ func (o *ThirdApi) ObjectRedirect(c *gin.Context) {
// #################### logs ####################
func (o *ThirdApi) UploadLogs(c *gin.Context) {
a2r.Call(c, third.ThirdClient.UploadLogs, o.Client)
a2r.Call(third.ThirdClient.UploadLogs, o.Client, c)
}
func (o *ThirdApi) DeleteLogs(c *gin.Context) {
a2r.Call(c, third.ThirdClient.DeleteLogs, o.Client)
a2r.Call(third.ThirdClient.DeleteLogs, o.Client, c)
}
func (o *ThirdApi) SearchLogs(c *gin.Context) {
a2r.Call(c, third.ThirdClient.SearchLogs, o.Client)
a2r.Call(third.ThirdClient.SearchLogs, o.Client, c)
}
func (o *ThirdApi) GetPrometheus(c *gin.Context) {
+26 -31
@@ -16,57 +16,52 @@ package api
import (
"github.com/gin-gonic/gin"
"github.com/openimsdk/open-im-server/v3/pkg/common/config"
"github.com/openimsdk/open-im-server/v3/pkg/rpcclient"
"github.com/openimsdk/protocol/constant"
"github.com/openimsdk/protocol/msggateway"
"github.com/openimsdk/protocol/user"
"github.com/openimsdk/tools/a2r"
"github.com/openimsdk/tools/apiresp"
"github.com/openimsdk/tools/discovery"
"github.com/openimsdk/tools/errs"
"github.com/openimsdk/tools/log"
)
type UserApi struct {
Client user.UserClient
discov discovery.SvcDiscoveryRegistry
config config.RpcRegisterName
}
type UserApi rpcclient.User
func NewUserApi(client user.UserClient, discov discovery.SvcDiscoveryRegistry, config config.RpcRegisterName) UserApi {
return UserApi{Client: client, discov: discov, config: config}
func NewUserApi(client rpcclient.User) UserApi {
return UserApi(client)
}
func (u *UserApi) UserRegister(c *gin.Context) {
a2r.Call(c, user.UserClient.UserRegister, u.Client)
a2r.Call(user.UserClient.UserRegister, u.Client, c)
}
// UpdateUserInfo is deprecated. Use UpdateUserInfoEx instead.
func (u *UserApi) UpdateUserInfo(c *gin.Context) {
a2r.Call(c, user.UserClient.UpdateUserInfo, u.Client)
a2r.Call(user.UserClient.UpdateUserInfo, u.Client, c)
}
func (u *UserApi) UpdateUserInfoEx(c *gin.Context) {
a2r.Call(c, user.UserClient.UpdateUserInfoEx, u.Client)
a2r.Call(user.UserClient.UpdateUserInfoEx, u.Client, c)
}
func (u *UserApi) SetGlobalRecvMessageOpt(c *gin.Context) {
a2r.Call(c, user.UserClient.SetGlobalRecvMessageOpt, u.Client)
a2r.Call(user.UserClient.SetGlobalRecvMessageOpt, u.Client, c)
}
func (u *UserApi) GetUsersPublicInfo(c *gin.Context) {
a2r.Call(c, user.UserClient.GetDesignateUsers, u.Client)
a2r.Call(user.UserClient.GetDesignateUsers, u.Client, c)
}
func (u *UserApi) GetAllUsersID(c *gin.Context) {
a2r.Call(c, user.UserClient.GetAllUserID, u.Client)
a2r.Call(user.UserClient.GetAllUserID, u.Client, c)
}
func (u *UserApi) AccountCheck(c *gin.Context) {
a2r.Call(c, user.UserClient.AccountCheck, u.Client)
a2r.Call(user.UserClient.AccountCheck, u.Client, c)
}
func (u *UserApi) GetUsers(c *gin.Context) {
a2r.Call(c, user.UserClient.GetPaginationUsers, u.Client)
a2r.Call(user.UserClient.GetPaginationUsers, u.Client, c)
}
// GetUsersOnlineStatus gets the online status of users.
@@ -76,7 +71,7 @@ func (u *UserApi) GetUsersOnlineStatus(c *gin.Context) {
apiresp.GinError(c, err)
return
}
conns, err := u.discov.GetConns(c, u.config.MessageGateway)
conns, err := u.Discov.GetConns(c, u.MessageGateWayRpcName)
if err != nil {
apiresp.GinError(c, err)
return
@@ -127,7 +122,7 @@ func (u *UserApi) GetUsersOnlineStatus(c *gin.Context) {
}
func (u *UserApi) UserRegisterCount(c *gin.Context) {
a2r.Call(c, user.UserClient.UserRegisterCount, u.Client)
a2r.Call(user.UserClient.UserRegisterCount, u.Client, c)
}
// GetUsersOnlineTokenDetail gets the online token details of users.
@@ -140,7 +135,7 @@ func (u *UserApi) GetUsersOnlineTokenDetail(c *gin.Context) {
apiresp.GinError(c, errs.ErrArgs.WithDetail(err.Error()).Wrap())
return
}
conns, err := u.discov.GetConns(c, u.config.MessageGateway)
conns, err := u.Discov.GetConns(c, u.MessageGateWayRpcName)
if err != nil {
apiresp.GinError(c, err)
return
@@ -193,52 +188,52 @@ func (u *UserApi) GetUsersOnlineTokenDetail(c *gin.Context) {
// SubscriberStatus handles the presence status of subscribed users.
func (u *UserApi) SubscriberStatus(c *gin.Context) {
a2r.Call(c, user.UserClient.SubscribeOrCancelUsersStatus, u.Client)
a2r.Call(user.UserClient.SubscribeOrCancelUsersStatus, u.Client, c)
}
// GetUserStatus gets the online status of a user.
func (u *UserApi) GetUserStatus(c *gin.Context) {
a2r.Call(c, user.UserClient.GetUserStatus, u.Client)
a2r.Call(user.UserClient.GetUserStatus, u.Client, c)
}
// GetSubscribeUsersStatus gets the online status of subscribers.
func (u *UserApi) GetSubscribeUsersStatus(c *gin.Context) {
a2r.Call(c, user.UserClient.GetSubscribeUsersStatus, u.Client)
a2r.Call(user.UserClient.GetSubscribeUsersStatus, u.Client, c)
}
// ProcessUserCommandAdd adds a user general function.
func (u *UserApi) ProcessUserCommandAdd(c *gin.Context) {
a2r.Call(c, user.UserClient.ProcessUserCommandAdd, u.Client)
a2r.Call(user.UserClient.ProcessUserCommandAdd, u.Client, c)
}
// ProcessUserCommandDelete deletes a user general function.
func (u *UserApi) ProcessUserCommandDelete(c *gin.Context) {
a2r.Call(c, user.UserClient.ProcessUserCommandDelete, u.Client)
a2r.Call(user.UserClient.ProcessUserCommandDelete, u.Client, c)
}
// ProcessUserCommandUpdate updates a user general function.
func (u *UserApi) ProcessUserCommandUpdate(c *gin.Context) {
a2r.Call(c, user.UserClient.ProcessUserCommandUpdate, u.Client)
a2r.Call(user.UserClient.ProcessUserCommandUpdate, u.Client, c)
}
// ProcessUserCommandGet gets a user general function.
func (u *UserApi) ProcessUserCommandGet(c *gin.Context) {
a2r.Call(c, user.UserClient.ProcessUserCommandGet, u.Client)
a2r.Call(user.UserClient.ProcessUserCommandGet, u.Client, c)
}
// ProcessUserCommandGetAll gets all user general functions.
func (u *UserApi) ProcessUserCommandGetAll(c *gin.Context) {
a2r.Call(c, user.UserClient.ProcessUserCommandGetAll, u.Client)
a2r.Call(user.UserClient.ProcessUserCommandGetAll, u.Client, c)
}
func (u *UserApi) AddNotificationAccount(c *gin.Context) {
a2r.Call(c, user.UserClient.AddNotificationAccount, u.Client)
a2r.Call(user.UserClient.AddNotificationAccount, u.Client, c)
}
func (u *UserApi) UpdateNotificationAccountInfo(c *gin.Context) {
a2r.Call(c, user.UserClient.UpdateNotificationAccountInfo, u.Client)
a2r.Call(user.UserClient.UpdateNotificationAccountInfo, u.Client, c)
}
func (u *UserApi) SearchNotificationAccount(c *gin.Context) {
a2r.Call(c, user.UserClient.SearchNotificationAccount, u.Client)
a2r.Call(user.UserClient.SearchNotificationAccount, u.Client, c)
}
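Throughout this file the adapter call order flips from a2r.Call(c, rpcFn, u.Client) to a2r.Call(rpcFn, u.Client, c). The a2r internals are not part of this diff; a hypothetical generic sketch of what such an HTTP-to-gRPC bridge does (CallOption and the error response shapes are stand-ins, not the real a2r API):

package a2rsketch

import (
    "context"
    "net/http"

    "github.com/gin-gonic/gin"
)

// CallOption stands in for grpc.CallOption so the sketch is self-contained.
type CallOption any

// Call is a hypothetical reconstruction of the a2r.Call shape used above:
// rpc is a method expression such as user.UserClient.UserRegister, client
// is the typed gRPC client, and c carries the HTTP request and response.
func Call[C, Req, Resp any](
    rpc func(C, context.Context, *Req, ...CallOption) (*Resp, error),
    client C,
    c *gin.Context,
) {
    var req Req
    if err := c.ShouldBindJSON(&req); err != nil {
        c.JSON(http.StatusBadRequest, gin.H{"errMsg": err.Error()})
        return
    }
    resp, err := rpc(client, c, &req) // *gin.Context implements context.Context
    if err != nil {
        c.JSON(http.StatusInternalServerError, gin.H{"errMsg": err.Error()})
        return
    }
    c.JSON(http.StatusOK, gin.H{"errCode": 0, "data": resp})
}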
+4 -4
@@ -18,6 +18,8 @@ import (
"context"
"encoding/json"
"fmt"
"github.com/openimsdk/tools/mw"
"runtime/debug"
"sync"
"sync/atomic"
"time"
@@ -131,7 +133,7 @@ func (c *Client) readMessage() {
defer func() {
if r := recover(); r != nil {
c.closedErr = ErrPanic
log.ZPanic(c.ctx, "socket have panic err:", errs.ErrPanic(r))
fmt.Println("socket have panic err:", r, string(debug.Stack()))
}
c.close()
}()
@@ -236,8 +238,6 @@ func (c *Client) handleMessage(message []byte) error {
resp, messageErr = c.longConnServer.GetSeqMessage(ctx, binaryReq)
case WSGetConvMaxReadSeq:
resp, messageErr = c.longConnServer.GetConversationsHasReadAndMaxSeq(ctx, binaryReq)
case WsPullConvLastMessage:
resp, messageErr = c.longConnServer.GetLastMessage(ctx, binaryReq)
case WsLogoutMsg:
resp, messageErr = c.longConnServer.UserLogout(ctx, binaryReq)
case WsSetBackgroundStatus:
@@ -378,7 +378,7 @@ func (c *Client) activeHeartbeat(ctx context.Context) {
go func() {
defer func() {
if r := recover(); r != nil {
log.ZPanic(ctx, "activeHeartbeat Panic", errs.ErrPanic(r))
mw.PanicStackToLog(ctx, r)
}
}()
log.ZDebug(ctx, "server initiative send heartbeat start.")
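This diff repeatedly swaps ad-hoc recovery (fmt.Println with debug.Stack, log.ZPanic) for mw.PanicStackToLog. Its implementation is not shown here; a minimal sketch of a recover-and-log-stack helper in that spirit, using the standard library logger as a stand-in:

package mwsketch

import (
    "context"
    "log"
    "runtime/debug"
)

// PanicStackToLog is a hypothetical stand-in for mw.PanicStackToLog: it
// records the recovered value together with the goroutine stack so that
// background goroutines never die silently.
func PanicStackToLog(ctx context.Context, r any) {
    log.Printf("panic recovered: %v\n%s", r, debug.Stack())
}

// Go mirrors the activeHeartbeat usage above: every spawned goroutine
// installs the same deferred recover before running its body.
func Go(ctx context.Context, fn func()) {
    go func() {
        defer func() {
            if r := recover(); r != nil {
                PanicStackToLog(ctx, r)
            }
        }()
        fn()
    }()
}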
-1
@@ -47,7 +47,6 @@ const (
WSSendSignalMsg = 1004
WSPullMsg = 1005
WSGetConvMaxReadSeq = 1006
WsPullConvLastMessage = 1007
WSPushMsg = 2001
WSKickOnlineMsg = 2002
WsLogoutMsg = 2003
+15 -17
@@ -18,11 +18,10 @@ import (
"context"
"sync/atomic"
"github.com/openimsdk/open-im-server/v3/pkg/rpcli"
"github.com/openimsdk/open-im-server/v3/pkg/authverify"
"github.com/openimsdk/open-im-server/v3/pkg/common/servererrs"
"github.com/openimsdk/open-im-server/v3/pkg/common/startrpc"
"github.com/openimsdk/open-im-server/v3/pkg/rpcclient"
"github.com/openimsdk/protocol/constant"
"github.com/openimsdk/protocol/msggateway"
"github.com/openimsdk/protocol/sdkws"
@@ -36,15 +35,9 @@ import (
)
func (s *Server) InitServer(ctx context.Context, config *Config, disCov discovery.SvcDiscoveryRegistry, server *grpc.Server) error {
userConn, err := disCov.GetConn(ctx, config.Share.RpcRegisterName.User)
if err != nil {
return err
}
s.userClient = rpcli.NewUserClient(userConn)
if err := s.LongConnServer.SetDiscoveryRegistry(ctx, disCov, config); err != nil {
return err
}
s.LongConnServer.SetDiscoveryRegistry(disCov, config)
msggateway.RegisterMsgGatewayServer(server, s)
s.userRcp = rpcclient.NewUserRpcClient(disCov, config.Share.RpcRegisterName.User, config.Share.IMAdminUserID)
if s.ready != nil {
return s.ready(s)
}
@@ -54,35 +47,31 @@ func (s *Server) InitServer(ctx context.Context, config *Config, disCov discover
func (s *Server) Start(ctx context.Context, index int, conf *Config) error {
return startrpc.Start(ctx, &conf.Discovery, &conf.MsgGateway.Prometheus, conf.MsgGateway.ListenIP,
conf.MsgGateway.RPC.RegisterIP,
conf.MsgGateway.RPC.AutoSetPorts,
conf.MsgGateway.RPC.Ports, index,
conf.Share.RpcRegisterName.MessageGateway,
&conf.Share,
conf,
[]string{
conf.Share.RpcRegisterName.MessageGateway,
},
s.InitServer,
)
}
type Server struct {
msggateway.UnimplementedMsgGatewayServer
rpcPort int
LongConnServer LongConnServer
config *Config
pushTerminal map[int]struct{}
ready func(srv *Server) error
userRcp rpcclient.UserRpcClient
queue *memamq.MemoryQueue
userClient *rpcli.UserClient
}
func (s *Server) SetLongConnServer(LongConnServer LongConnServer) {
s.LongConnServer = LongConnServer
}
func NewServer(longConnServer LongConnServer, conf *Config, ready func(srv *Server) error) *Server {
func NewServer(rpcPort int, longConnServer LongConnServer, conf *Config, ready func(srv *Server) error) *Server {
s := &Server{
rpcPort: rpcPort,
LongConnServer: longConnServer,
pushTerminal: make(map[int]struct{}),
config: conf,
@@ -94,6 +83,10 @@ func NewServer(longConnServer LongConnServer, conf *Config, ready func(srv *Serv
return s
}
func (s *Server) OnlinePushMsg(context context.Context, req *msggateway.OnlinePushMsgReq) (*msggateway.OnlinePushMsgResp, error) {
panic("implement me")
}
func (s *Server) GetUsersOnlineStatus(ctx context.Context, req *msggateway.GetUsersOnlineStatusReq) (*msggateway.GetUsersOnlineStatusResp, error) {
if !authverify.IsAppManagerUid(ctx, s.config.Share.IMAdminUserID) {
return nil, errs.ErrNoPermission.WrapMsg("only app manager")
@@ -127,6 +120,11 @@ func (s *Server) GetUsersOnlineStatus(ctx context.Context, req *msggateway.GetUs
return &resp, nil
}
func (s *Server) OnlineBatchPushOneMsg(ctx context.Context, req *msggateway.OnlineBatchPushOneMsgReq) (*msggateway.OnlineBatchPushOneMsgResp, error) {
// todo implement
return nil, nil
}
func (s *Server) pushToUser(ctx context.Context, userID string, msgData *sdkws.MsgData) *msggateway.SingleMsgToUserResults {
clients, ok := s.LongConnServer.GetUserAllCons(userID)
if !ok {
+11 -9
@@ -16,13 +16,13 @@ package msggateway
import (
"context"
"time"
"github.com/openimsdk/open-im-server/v3/pkg/common/config"
"github.com/openimsdk/open-im-server/v3/pkg/rpccache"
"github.com/openimsdk/tools/db/redisutil"
"github.com/openimsdk/tools/log"
"github.com/openimsdk/tools/utils/datautil"
"time"
"github.com/openimsdk/tools/log"
)
type Config struct {
@@ -35,13 +35,16 @@ type Config struct {
// Start run ws server.
func Start(ctx context.Context, index int, conf *Config) error {
log.CInfo(ctx, "MSG-GATEWAY server is initializing", "autoSetPorts", conf.MsgGateway.RPC.AutoSetPorts,
"rpcPorts", conf.MsgGateway.RPC.Ports,
log.CInfo(ctx, "MSG-GATEWAY server is initializing", "rpcPorts", conf.MsgGateway.RPC.Ports,
"wsPort", conf.MsgGateway.LongConnSvr.Ports, "prometheusPorts", conf.MsgGateway.Prometheus.Ports)
wsPort, err := datautil.GetElemByIndex(conf.MsgGateway.LongConnSvr.Ports, index)
if err != nil {
return err
}
rpcPort, err := datautil.GetElemByIndex(conf.MsgGateway.RPC.Ports, index)
if err != nil {
return err
}
rdb, err := redisutil.NewRedisClient(ctx, conf.RedisConfig.Build())
if err != nil {
return err
@@ -54,10 +57,9 @@ func Start(ctx context.Context, index int, conf *Config) error {
WithMessageMaxMsgLength(conf.MsgGateway.LongConnSvr.WebsocketMaxMsgLen),
)
hubServer := NewServer(longServer, conf, func(srv *Server) error {
var err error
longServer.online, err = rpccache.NewOnlineCache(srv.userClient, nil, rdb, false, longServer.subscriberUserOnlineStatusChanges)
return err
hubServer := NewServer(rpcPort, longServer, conf, func(srv *Server) error {
longServer.online, _ = rpccache.NewOnlineCache(srv.userRcp, nil, rdb, false, longServer.subscriberUserOnlineStatusChanges)
return nil
})
go longServer.ChangeOnlineStatus(4)
+34 -44
@@ -17,15 +17,17 @@ package msggateway
import (
"context"
"encoding/json"
"github.com/openimsdk/open-im-server/v3/pkg/rpcli"
"sync"
"github.com/go-playground/validator/v10"
"google.golang.org/protobuf/proto"
"github.com/openimsdk/open-im-server/v3/pkg/common/config"
"github.com/openimsdk/open-im-server/v3/pkg/rpcclient"
"github.com/openimsdk/protocol/msg"
"github.com/openimsdk/protocol/push"
"github.com/openimsdk/protocol/sdkws"
"github.com/openimsdk/tools/discovery"
"github.com/openimsdk/tools/errs"
"github.com/openimsdk/tools/utils/jsonutil"
)
@@ -100,34 +102,34 @@ func (r *Resp) String() string {
}
type MessageHandler interface {
GetSeq(ctx context.Context, data *Req) ([]byte, error)
SendMessage(ctx context.Context, data *Req) ([]byte, error)
SendSignalMessage(ctx context.Context, data *Req) ([]byte, error)
PullMessageBySeqList(ctx context.Context, data *Req) ([]byte, error)
GetConversationsHasReadAndMaxSeq(ctx context.Context, data *Req) ([]byte, error)
GetSeqMessage(ctx context.Context, data *Req) ([]byte, error)
UserLogout(ctx context.Context, data *Req) ([]byte, error)
SetUserDeviceBackground(ctx context.Context, data *Req) ([]byte, bool, error)
GetLastMessage(ctx context.Context, data *Req) ([]byte, error)
GetSeq(context context.Context, data *Req) ([]byte, error)
SendMessage(context context.Context, data *Req) ([]byte, error)
SendSignalMessage(context context.Context, data *Req) ([]byte, error)
PullMessageBySeqList(context context.Context, data *Req) ([]byte, error)
GetConversationsHasReadAndMaxSeq(context context.Context, data *Req) ([]byte, error)
GetSeqMessage(context context.Context, data *Req) ([]byte, error)
UserLogout(context context.Context, data *Req) ([]byte, error)
SetUserDeviceBackground(context context.Context, data *Req) ([]byte, bool, error)
}
var _ MessageHandler = (*GrpcHandler)(nil)
type GrpcHandler struct {
validate *validator.Validate
msgClient *rpcli.MsgClient
pushClient *rpcli.PushMsgServiceClient
msgRpcClient *rpcclient.MessageRpcClient
pushClient *rpcclient.PushRpcClient
validate *validator.Validate
}
func NewGrpcHandler(validate *validator.Validate, msgClient *rpcli.MsgClient, pushClient *rpcli.PushMsgServiceClient) *GrpcHandler {
func NewGrpcHandler(validate *validator.Validate, client discovery.SvcDiscoveryRegistry, rpcRegisterName *config.RpcRegisterName) *GrpcHandler {
msgRpcClient := rpcclient.NewMessageRpcClient(client, rpcRegisterName.Msg)
pushRpcClient := rpcclient.NewPushRpcClient(client, rpcRegisterName.Push)
return &GrpcHandler{
validate: validate,
msgClient: msgClient,
pushClient: pushClient,
msgRpcClient: &msgRpcClient,
pushClient: &pushRpcClient, validate: validate,
}
}
func (g *GrpcHandler) GetSeq(ctx context.Context, data *Req) ([]byte, error) {
func (g GrpcHandler) GetSeq(ctx context.Context, data *Req) ([]byte, error) {
req := sdkws.GetMaxSeqReq{}
if err := proto.Unmarshal(data.Data, &req); err != nil {
return nil, errs.WrapMsg(err, "GetSeq: error unmarshaling request", "action", "unmarshal", "dataType", "GetMaxSeqReq")
@@ -135,7 +137,7 @@ func (g *GrpcHandler) GetSeq(ctx context.Context, data *Req) ([]byte, error) {
if err := g.validate.Struct(&req); err != nil {
return nil, errs.WrapMsg(err, "GetSeq: validation failed", "action", "validate", "dataType", "GetMaxSeqReq")
}
resp, err := g.msgClient.MsgClient.GetMaxSeq(ctx, &req)
resp, err := g.msgRpcClient.GetMaxSeq(ctx, &req)
if err != nil {
return nil, err
}
@@ -148,7 +150,7 @@ func (g *GrpcHandler) GetSeq(ctx context.Context, data *Req) ([]byte, error) {
// SendMessage handles the sending of messages through gRPC. It unmarshals the request data,
// validates the message, and then sends it using the message RPC client.
func (g *GrpcHandler) SendMessage(ctx context.Context, data *Req) ([]byte, error) {
func (g GrpcHandler) SendMessage(ctx context.Context, data *Req) ([]byte, error) {
var msgData sdkws.MsgData
if err := proto.Unmarshal(data.Data, &msgData); err != nil {
return nil, errs.WrapMsg(err, "SendMessage: error unmarshaling message data", "action", "unmarshal", "dataType", "MsgData")
@@ -159,7 +161,7 @@ func (g *GrpcHandler) SendMessage(ctx context.Context, data *Req) ([]byte, error
}
req := msg.SendMsgReq{MsgData: &msgData}
resp, err := g.msgClient.MsgClient.SendMsg(ctx, &req)
resp, err := g.msgRpcClient.SendMsg(ctx, &req)
if err != nil {
return nil, err
}
@@ -172,8 +174,8 @@ func (g *GrpcHandler) SendMessage(ctx context.Context, data *Req) ([]byte, error
return c, nil
}
func (g *GrpcHandler) SendSignalMessage(ctx context.Context, data *Req) ([]byte, error) {
resp, err := g.msgClient.MsgClient.SendMsg(ctx, nil)
func (g GrpcHandler) SendSignalMessage(context context.Context, data *Req) ([]byte, error) {
resp, err := g.msgRpcClient.SendMsg(context, nil)
if err != nil {
return nil, err
}
@@ -184,7 +186,7 @@ func (g *GrpcHandler) SendSignalMessage(ctx context.Context, data *Req) ([]byte,
return c, nil
}
func (g *GrpcHandler) PullMessageBySeqList(ctx context.Context, data *Req) ([]byte, error) {
func (g GrpcHandler) PullMessageBySeqList(context context.Context, data *Req) ([]byte, error) {
req := sdkws.PullMessageBySeqsReq{}
if err := proto.Unmarshal(data.Data, &req); err != nil {
return nil, errs.WrapMsg(err, "err proto unmarshal", "action", "unmarshal", "dataType", "PullMessageBySeqsReq")
@@ -192,7 +194,7 @@ func (g *GrpcHandler) PullMessageBySeqList(ctx context.Context, data *Req) ([]by
if err := g.validate.Struct(data); err != nil {
return nil, errs.WrapMsg(err, "validation failed", "action", "validate", "dataType", "PullMessageBySeqsReq")
}
resp, err := g.msgClient.MsgClient.PullMessageBySeqs(ctx, &req)
resp, err := g.msgRpcClient.PullMessageBySeqList(context, &req)
if err != nil {
return nil, err
}
@@ -203,7 +205,7 @@ func (g *GrpcHandler) PullMessageBySeqList(ctx context.Context, data *Req) ([]by
return c, nil
}
func (g *GrpcHandler) GetConversationsHasReadAndMaxSeq(ctx context.Context, data *Req) ([]byte, error) {
func (g GrpcHandler) GetConversationsHasReadAndMaxSeq(context context.Context, data *Req) ([]byte, error) {
req := msg.GetConversationsHasReadAndMaxSeqReq{}
if err := proto.Unmarshal(data.Data, &req); err != nil {
return nil, errs.WrapMsg(err, "err proto unmarshal", "action", "unmarshal", "dataType", "GetConversationsHasReadAndMaxSeq")
@@ -211,7 +213,7 @@ func (g *GrpcHandler) GetConversationsHasReadAndMaxSeq(ctx context.Context, data
if err := g.validate.Struct(data); err != nil {
return nil, errs.WrapMsg(err, "validation failed", "action", "validate", "dataType", "GetConversationsHasReadAndMaxSeq")
}
resp, err := g.msgClient.MsgClient.GetConversationsHasReadAndMaxSeq(ctx, &req)
resp, err := g.msgRpcClient.GetConversationsHasReadAndMaxSeq(context, &req)
if err != nil {
return nil, err
}
@@ -222,7 +224,7 @@ func (g *GrpcHandler) GetConversationsHasReadAndMaxSeq(ctx context.Context, data
return c, nil
}
func (g *GrpcHandler) GetSeqMessage(ctx context.Context, data *Req) ([]byte, error) {
func (g GrpcHandler) GetSeqMessage(context context.Context, data *Req) ([]byte, error) {
req := msg.GetSeqMessageReq{}
if err := proto.Unmarshal(data.Data, &req); err != nil {
return nil, errs.WrapMsg(err, "error unmarshaling request", "action", "unmarshal", "dataType", "GetSeqMessage")
@@ -230,7 +232,7 @@ func (g *GrpcHandler) GetSeqMessage(ctx context.Context, data *Req) ([]byte, err
if err := g.validate.Struct(data); err != nil {
return nil, errs.WrapMsg(err, "validation failed", "action", "validate", "dataType", "GetSeqMessage")
}
resp, err := g.msgClient.MsgClient.GetSeqMessage(ctx, &req)
resp, err := g.msgRpcClient.GetSeqMessage(context, &req)
if err != nil {
return nil, err
}
@@ -241,12 +243,12 @@ func (g *GrpcHandler) GetSeqMessage(ctx context.Context, data *Req) ([]byte, err
return c, nil
}
func (g *GrpcHandler) UserLogout(ctx context.Context, data *Req) ([]byte, error) {
func (g GrpcHandler) UserLogout(context context.Context, data *Req) ([]byte, error) {
req := push.DelUserPushTokenReq{}
if err := proto.Unmarshal(data.Data, &req); err != nil {
return nil, errs.WrapMsg(err, "error unmarshaling request", "action", "unmarshal", "dataType", "DelUserPushTokenReq")
}
resp, err := g.pushClient.PushMsgServiceClient.DelUserPushToken(ctx, &req)
resp, err := g.pushClient.DelUserPushToken(context, &req)
if err != nil {
return nil, err
}
@@ -257,7 +259,7 @@ func (g *GrpcHandler) UserLogout(ctx context.Context, data *Req) ([]byte, error)
return c, nil
}
func (g *GrpcHandler) SetUserDeviceBackground(ctx context.Context, data *Req) ([]byte, bool, error) {
func (g GrpcHandler) SetUserDeviceBackground(_ context.Context, data *Req) ([]byte, bool, error) {
req := sdkws.SetAppBackgroundStatusReq{}
if err := proto.Unmarshal(data.Data, &req); err != nil {
return nil, false, errs.WrapMsg(err, "error unmarshaling request", "action", "unmarshal", "dataType", "SetAppBackgroundStatusReq")
@@ -267,15 +269,3 @@ func (g *GrpcHandler) SetUserDeviceBackground(ctx context.Context, data *Req) ([
}
return nil, req.IsBackground, nil
}
func (g *GrpcHandler) GetLastMessage(ctx context.Context, data *Req) ([]byte, error) {
var req msg.GetLastMessageReq
if err := proto.Unmarshal(data.Data, &req); err != nil {
return nil, err
}
resp, err := g.msgClient.GetLastMessage(ctx, &req)
if err != nil {
return nil, err
}
return proto.Marshal(resp)
}
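Each GrpcHandler method above repeats the same pipeline: proto.Unmarshal the websocket payload into the request message, validate it, forward it to the RPC client, and proto.Marshal the response back to bytes. A condensed generic sketch of that pipeline, with stand-in function parameters rather than the repo's own types:

package wssketch

import (
    "context"
    "fmt"

    "google.golang.org/protobuf/proto"
)

// handle is a hypothetical generic version of GetSeq/SendMessage/...:
// payload -> proto.Unmarshal -> validate -> rpc -> proto.Marshal.
// newReq constructs a non-nil request message for unmarshaling.
func handle[Req, Resp proto.Message](
    ctx context.Context,
    payload []byte,
    newReq func() Req,
    validate func(Req) error,
    rpc func(context.Context, Req) (Resp, error),
) ([]byte, error) {
    req := newReq()
    if err := proto.Unmarshal(payload, req); err != nil {
        return nil, fmt.Errorf("unmarshal: %w", err)
    }
    if err := validate(req); err != nil {
        return nil, fmt.Errorf("validate: %w", err)
    }
    resp, err := rpc(ctx, req)
    if err != nil {
        return nil, err
    }
    return proto.Marshal(resp)
}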
+1 -1
@@ -87,7 +87,7 @@ func (ws *WsServer) ChangeOnlineStatus(concurrent int) {
opIdCtx := mcontext.SetOperationID(context.Background(), operationIDPrefix+strconv.FormatInt(count.Add(1), 10))
ctx, cancel := context.WithTimeout(opIdCtx, time.Second*5)
defer cancel()
if err := ws.userClient.SetUserOnlineStatus(ctx, req); err != nil {
if _, err := ws.userClient.Client.SetUserOnlineStatus(ctx, req); err != nil {
log.ZError(ctx, "update user online status", err)
}
for _, ss := range req.Status {
+15 -36
@@ -3,7 +3,10 @@ package msggateway
import (
"context"
"fmt"
"github.com/openimsdk/open-im-server/v3/pkg/rpcli"
"github.com/openimsdk/open-im-server/v3/pkg/common/webhook"
"github.com/openimsdk/open-im-server/v3/pkg/rpccache"
pbAuth "github.com/openimsdk/protocol/auth"
"github.com/openimsdk/tools/mcontext"
"net/http"
"sync"
"sync/atomic"
@@ -12,15 +15,12 @@ import (
"github.com/go-playground/validator/v10"
"github.com/openimsdk/open-im-server/v3/pkg/common/prommetrics"
"github.com/openimsdk/open-im-server/v3/pkg/common/servererrs"
"github.com/openimsdk/open-im-server/v3/pkg/common/webhook"
"github.com/openimsdk/open-im-server/v3/pkg/rpccache"
pbAuth "github.com/openimsdk/protocol/auth"
"github.com/openimsdk/open-im-server/v3/pkg/rpcclient"
"github.com/openimsdk/protocol/constant"
"github.com/openimsdk/protocol/msggateway"
"github.com/openimsdk/tools/discovery"
"github.com/openimsdk/tools/errs"
"github.com/openimsdk/tools/log"
"github.com/openimsdk/tools/mcontext"
"github.com/openimsdk/tools/utils/stringutil"
"golang.org/x/sync/errgroup"
)
@@ -31,7 +31,7 @@ type LongConnServer interface {
GetUserAllCons(userID string) ([]*Client, bool)
GetUserPlatformCons(userID string, platform int) ([]*Client, bool, bool)
Validate(s any) error
SetDiscoveryRegistry(ctx context.Context, client discovery.SvcDiscoveryRegistry, config *Config) error
SetDiscoveryRegistry(client discovery.SvcDiscoveryRegistry, config *Config)
KickUserConn(client *Client) error
UnRegister(c *Client)
SetKickHandlerInfo(i *kickHandler)
@@ -56,13 +56,13 @@ type WsServer struct {
handshakeTimeout time.Duration
writeBufferSize int
validate *validator.Validate
userClient *rpcclient.UserRpcClient
authClient *rpcclient.Auth
disCov discovery.SvcDiscoveryRegistry
Compressor
//Encoder
MessageHandler
webhookClient *webhook.Client
userClient *rpcli.UserClient
authClient *rpcli.AuthClient
}
type kickHandler struct {
@@ -71,28 +71,12 @@ type kickHandler struct {
newClient *Client
}
func (ws *WsServer) SetDiscoveryRegistry(ctx context.Context, disCov discovery.SvcDiscoveryRegistry, config *Config) error {
userConn, err := disCov.GetConn(ctx, config.Share.RpcRegisterName.User)
if err != nil {
return err
}
pushConn, err := disCov.GetConn(ctx, config.Share.RpcRegisterName.Push)
if err != nil {
return err
}
authConn, err := disCov.GetConn(ctx, config.Share.RpcRegisterName.Auth)
if err != nil {
return err
}
msgConn, err := disCov.GetConn(ctx, config.Share.RpcRegisterName.Msg)
if err != nil {
return err
}
ws.userClient = rpcli.NewUserClient(userConn)
ws.authClient = rpcli.NewAuthClient(authConn)
ws.MessageHandler = NewGrpcHandler(ws.validate, rpcli.NewMsgClient(msgConn), rpcli.NewPushMsgServiceClient(pushConn))
func (ws *WsServer) SetDiscoveryRegistry(disCov discovery.SvcDiscoveryRegistry, config *Config) {
ws.MessageHandler = NewGrpcHandler(ws.validate, disCov, &config.Share.RpcRegisterName)
u := rpcclient.NewUserRpcClient(disCov, config.Share.RpcRegisterName.User, config.Share.IMAdminUserID)
ws.authClient = rpcclient.NewAuth(disCov, config.Share.RpcRegisterName.Auth)
ws.userClient = &u
ws.disCov = disCov
return nil
}
//func (ws *WsServer) SetUserOnlineStatus(ctx context.Context, client *Client, status int32) {
@@ -327,7 +311,7 @@ func (ws *WsServer) multiTerminalLoginChecker(clientOK bool, oldClients []*Clien
[]string{newClient.ctx.GetOperationID(), newClient.ctx.GetUserID(),
constant.PlatformIDToName(newClient.PlatformID), newClient.ctx.GetConnID()},
)
if err := ws.authClient.KickTokens(ctx, kickTokens); err != nil {
if _, err := ws.authClient.KickTokens(ctx, kickTokens); err != nil {
log.ZWarn(newClient.ctx, "kickTokens err", err)
}
}
@@ -354,12 +338,7 @@ func (ws *WsServer) multiTerminalLoginChecker(clientOK bool, oldClients []*Clien
[]string{newClient.ctx.GetOperationID(), newClient.ctx.GetUserID(),
constant.PlatformIDToName(newClient.PlatformID), newClient.ctx.GetConnID()},
)
req := &pbAuth.InvalidateTokenReq{
PreservedToken: newClient.token,
UserID: newClient.UserID,
PlatformID: int32(newClient.PlatformID),
}
if err := ws.authClient.InvalidateToken(ctx, req); err != nil {
if _, err := ws.authClient.InvalidateToken(ctx, newClient.token, newClient.UserID, newClient.PlatformID); err != nil {
log.ZWarn(newClient.ctx, "InvalidateToken err", err, "userID", newClient.UserID,
"platformID", newClient.PlatformID)
}
+25 -76
@@ -18,17 +18,11 @@ import (
"context"
"errors"
"fmt"
"net"
"net/http"
"os"
"os/signal"
"strconv"
"syscall"
"github.com/openimsdk/tools/discovery/etcd"
"github.com/openimsdk/tools/utils/jsonutil"
"github.com/openimsdk/tools/utils/network"
"github.com/openimsdk/open-im-server/v3/pkg/common/prommetrics"
"github.com/openimsdk/open-im-server/v3/pkg/common/storage/cache/redis"
"github.com/openimsdk/open-im-server/v3/pkg/common/storage/database/mgo"
@@ -36,10 +30,10 @@ import (
"github.com/openimsdk/tools/db/redisutil"
"github.com/openimsdk/tools/utils/datautil"
conf "github.com/openimsdk/open-im-server/v3/pkg/common/config"
"github.com/openimsdk/open-im-server/v3/pkg/common/config"
discRegister "github.com/openimsdk/open-im-server/v3/pkg/common/discoveryregister"
kdisc "github.com/openimsdk/open-im-server/v3/pkg/common/discoveryregister"
"github.com/openimsdk/open-im-server/v3/pkg/common/storage/controller"
"github.com/openimsdk/open-im-server/v3/pkg/rpcclient"
"github.com/openimsdk/tools/errs"
"github.com/openimsdk/tools/log"
"github.com/openimsdk/tools/mw"
@@ -60,17 +54,16 @@ type MsgTransfer struct {
}
type Config struct {
MsgTransfer conf.MsgTransfer
RedisConfig conf.Redis
MongodbConfig conf.Mongo
KafkaConfig conf.Kafka
Share conf.Share
WebhooksConfig conf.Webhooks
Discovery conf.Discovery
MsgTransfer config.MsgTransfer
RedisConfig config.Redis
MongodbConfig config.Mongo
KafkaConfig config.Kafka
Share config.Share
WebhooksConfig config.Webhooks
Discovery config.Discovery
}
func Start(ctx context.Context, index int, config *Config) error {
log.CInfo(ctx, "MSG-TRANSFER server is initializing", "prometheusPorts",
config.MsgTransfer.Prometheus.Ports, "index", index)
@@ -82,18 +75,18 @@ func Start(ctx context.Context, index int, config *Config) error {
if err != nil {
return err
}
client, err := discRegister.NewDiscoveryRegister(&config.Discovery, &config.Share, nil)
client, err := discRegister.NewDiscoveryRegister(&config.Discovery, &config.Share)
if err != nil {
return err
}
client.AddOption(mw.GrpcClient(), grpc.WithTransportCredentials(insecure.NewCredentials()),
grpc.WithDefaultServiceConfig(fmt.Sprintf(`{"LoadBalancingPolicy": "%s"}`, "round_robin")))
msgModel := redis.NewMsgCache(rdb)
msgDocModel, err := mgo.NewMsgMongo(mgocli.GetDB())
if err != nil {
return err
}
msgModel := redis.NewMsgCache(rdb, msgDocModel)
seqConversation, err := mgo.NewSeqConversationMongo(mgocli.GetDB())
if err != nil {
return err
@@ -108,7 +101,9 @@ func Start(ctx context.Context, index int, config *Config) error {
if err != nil {
return err
}
historyCH, err := NewOnlineHistoryRedisConsumerHandler(ctx, client, config, msgTransferDatabase)
conversationRpcClient := rpcclient.NewConversationRpcClient(client, config.Share.RpcRegisterName.Conversation)
groupRpcClient := rpcclient.NewGroupRpcClient(client, config.Share.RpcRegisterName.Group)
historyCH, err := NewOnlineHistoryRedisConsumerHandler(&config.KafkaConfig, msgTransferDatabase, &conversationRpcClient, &groupRpcClient)
if err != nil {
return err
}
@@ -124,7 +119,7 @@ func Start(ctx context.Context, index int, config *Config) error {
return msgTransfer.Start(index, config)
}
func (m *MsgTransfer) Start(index int, cfg *Config) error {
func (m *MsgTransfer) Start(index int, config *Config) error {
m.ctx, m.cancel = context.WithCancel(context.Background())
var (
netDone = make(chan struct{}, 1)
@@ -139,67 +134,21 @@ func (m *MsgTransfer) Start(index int, cfg *Config) error {
return err
}
client, err := kdisc.NewDiscoveryRegister(&cfg.Discovery, &cfg.Share, nil)
if err != nil {
return errs.WrapMsg(err, "failed to register discovery service")
}
registerIP, err := network.GetRpcRegisterIP("")
if err != nil {
return err
}
getAutoPort := func() (net.Listener, int, error) {
registerAddr := net.JoinHostPort(registerIP, "0")
listener, err := net.Listen("tcp", registerAddr)
if err != nil {
return nil, 0, errs.WrapMsg(err, "listen err", "registerAddr", registerAddr)
}
_, portStr, _ := net.SplitHostPort(listener.Addr().String())
port, _ := strconv.Atoi(portStr)
return listener, port, nil
}
if cfg.MsgTransfer.Prometheus.AutoSetPorts && cfg.Discovery.Enable != conf.ETCD {
return errs.New("only etcd support autoSetPorts", "RegisterName", "api").Wrap()
}
if cfg.MsgTransfer.Prometheus.Enable {
var (
listener net.Listener
prometheusPort int
)
if cfg.MsgTransfer.Prometheus.AutoSetPorts {
listener, prometheusPort, err = getAutoPort()
if err != nil {
return err
}
etcdClient := client.(*etcd.SvcDiscoveryRegistryImpl).GetClient()
_, err = etcdClient.Put(context.TODO(), prommetrics.BuildDiscoveryKey(prommetrics.MessageTransferKeyName), jsonutil.StructToJsonString(prommetrics.BuildDefaultTarget(registerIP, prometheusPort)))
if err != nil {
return errs.WrapMsg(err, "etcd put err")
}
} else {
prometheusPort, err = datautil.GetElemByIndex(cfg.MsgTransfer.Prometheus.Ports, index)
if err != nil {
return err
}
listener, err = net.Listen("tcp", fmt.Sprintf(":%d", prometheusPort))
if err != nil {
return errs.WrapMsg(err, "listen err", "addr", fmt.Sprintf(":%d", prometheusPort))
}
}
if config.MsgTransfer.Prometheus.Enable {
go func() {
defer func() {
if r := recover(); r != nil {
log.ZPanic(m.ctx, "MsgTransfer Start Panic", errs.ErrPanic(r))
mw.PanicStackToLog(m.ctx, r)
}
}()
if err := prommetrics.TransferInit(listener); err != nil && !errors.Is(err, http.ErrServerClosed) {
prometheusPort, err := datautil.GetElemByIndex(config.MsgTransfer.Prometheus.Ports, index)
if err != nil {
netErr = err
netDone <- struct{}{}
return
}
if err := prommetrics.TransferInit(prometheusPort); err != nil && !errors.Is(err, http.ErrServerClosed) {
netErr = errs.WrapMsg(err, "prometheus start error", "prometheusPort", prometheusPort)
netDone <- struct{}{}
}
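The removed getAutoPort helper above captures the usual trick behind AutoSetPorts: listen on port 0 so the kernel assigns any free port, then read the concrete port back from the listener's address before registering it in etcd. A self-contained sketch of that allocation step:

package main

import (
    "fmt"
    "net"
    "strconv"
)

// getAutoPort asks the OS for a free TCP port by listening on ":0",
// then recovers the chosen port from the listener's resolved address.
func getAutoPort(host string) (net.Listener, int, error) {
    listener, err := net.Listen("tcp", net.JoinHostPort(host, "0"))
    if err != nil {
        return nil, 0, err
    }
    _, portStr, err := net.SplitHostPort(listener.Addr().String())
    if err != nil {
        listener.Close()
        return nil, 0, err
    }
    port, err := strconv.Atoi(portStr)
    if err != nil {
        listener.Close()
        return nil, 0, err
    }
    return listener, port, nil
}

func main() {
    l, port, err := getAutoPort("127.0.0.1")
    if err != nil {
        panic(err)
    }
    defer l.Close()
    fmt.Println("allocated port:", port)
}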
@@ -18,22 +18,21 @@ import (
"context"
"encoding/json"
"errors"
"github.com/openimsdk/open-im-server/v3/pkg/rpcli"
"github.com/openimsdk/tools/discovery"
"github.com/openimsdk/open-im-server/v3/pkg/common/prommetrics"
"github.com/openimsdk/tools/mw"
"strconv"
"strings"
"sync"
"time"
"github.com/openimsdk/open-im-server/v3/pkg/common/prommetrics"
"github.com/IBM/sarama"
"github.com/go-redis/redis"
"github.com/openimsdk/open-im-server/v3/pkg/common/config"
"github.com/openimsdk/open-im-server/v3/pkg/common/storage/controller"
"github.com/openimsdk/open-im-server/v3/pkg/msgprocessor"
"github.com/openimsdk/open-im-server/v3/pkg/rpcclient"
"github.com/openimsdk/open-im-server/v3/pkg/tools/batcher"
"github.com/openimsdk/protocol/constant"
pbconv "github.com/openimsdk/protocol/conversation"
"github.com/openimsdk/protocol/sdkws"
"github.com/openimsdk/tools/errs"
"github.com/openimsdk/tools/log"
@@ -70,32 +69,21 @@ type OnlineHistoryRedisConsumerHandler struct {
redisMessageBatches *batcher.Batcher[sarama.ConsumerMessage]
msgTransferDatabase controller.MsgTransferDatabase
conversationRpcClient *rpcclient.ConversationRpcClient
groupRpcClient *rpcclient.GroupRpcClient
conversationUserHasReadChan chan *userHasReadSeq
wg sync.WaitGroup
groupClient *rpcli.GroupClient
conversationClient *rpcli.ConversationClient
}
func NewOnlineHistoryRedisConsumerHandler(ctx context.Context, client discovery.SvcDiscoveryRegistry, config *Config, database controller.MsgTransferDatabase) (*OnlineHistoryRedisConsumerHandler, error) {
kafkaConf := config.KafkaConfig
func NewOnlineHistoryRedisConsumerHandler(kafkaConf *config.Kafka, database controller.MsgTransferDatabase,
conversationRpcClient *rpcclient.ConversationRpcClient, groupRpcClient *rpcclient.GroupRpcClient) (*OnlineHistoryRedisConsumerHandler, error) {
historyConsumerGroup, err := kafka.NewMConsumerGroup(kafkaConf.Build(), kafkaConf.ToRedisGroupID, []string{kafkaConf.ToRedisTopic}, false)
if err != nil {
return nil, err
}
groupConn, err := client.GetConn(ctx, config.Share.RpcRegisterName.Group)
if err != nil {
return nil, err
}
conversationConn, err := client.GetConn(ctx, config.Share.RpcRegisterName.Conversation)
if err != nil {
return nil, err
}
var och OnlineHistoryRedisConsumerHandler
och.msgTransferDatabase = database
och.conversationUserHasReadChan = make(chan *userHasReadSeq, hasReadChanBuffer)
och.groupClient = rpcli.NewGroupClient(groupConn)
och.conversationClient = rpcli.NewConversationClient(conversationConn)
och.wg.Add(1)
b := batcher.New[sarama.ConsumerMessage](
@@ -115,21 +103,25 @@ func NewOnlineHistoryRedisConsumerHandler(ctx context.Context, client discovery.
}
b.Do = och.do
och.redisMessageBatches = b
och.conversationRpcClient = conversationRpcClient
och.groupRpcClient = groupRpcClient
och.historyConsumerGroup = historyConsumerGroup
return &och, nil
return &och, err
}
func (och *OnlineHistoryRedisConsumerHandler) do(ctx context.Context, channelID int, val *batcher.Msg[sarama.ConsumerMessage]) {
ctx = mcontext.WithTriggerIDContext(ctx, val.TriggerID())
ctxMessages := och.parseConsumerMessages(ctx, val.Val())
ctx = withAggregationCtx(ctx, ctxMessages)
log.ZInfo(ctx, "msg arrived channel", "channel id", channelID, "msgList length", len(ctxMessages), "key", val.Key())
log.ZInfo(ctx, "msg arrived channel", "channel id", channelID, "msgList length", len(ctxMessages),
"key", val.Key())
och.doSetReadSeq(ctx, ctxMessages)
storageMsgList, notStorageMsgList, storageNotificationList, notStorageNotificationList :=
och.categorizeMessageLists(ctxMessages)
log.ZDebug(ctx, "number of categorized messages", "storageMsgList", len(storageMsgList), "notStorageMsgList",
len(notStorageMsgList), "storageNotificationList", len(storageNotificationList), "notStorageNotificationList", len(notStorageNotificationList))
len(notStorageMsgList), "storageNotificationList", len(storageNotificationList), "notStorageNotificationList",
len(notStorageNotificationList))
conversationIDMsg := msgprocessor.GetChatConversationIDByMsg(ctxMessages[0].message)
conversationIDNotification := msgprocessor.GetNotificationConversationIDByMsg(ctxMessages[0].message)
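The handler above feeds Kafka messages into a batcher.Batcher keyed by conversation ID (see val.Key() and the channelID in do), presumably so that all messages of one conversation reach the same worker and keep their order. The batcher API is internal to the repo; a toy sketch of key-sharded dispatch illustrating that ordering property:

package main

import (
    "fmt"
    "hash/fnv"
    "sync"
)

// shardByKey fans messages out to N workers, routing every key to a fixed
// worker so per-key ordering is preserved. Each message is a [key, body] pair.
func shardByKey(workers int, msgs <-chan [2]string) *sync.WaitGroup {
    chans := make([]chan [2]string, workers)
    var wg sync.WaitGroup
    for i := range chans {
        chans[i] = make(chan [2]string, 16)
        wg.Add(1)
        go func(id int, in <-chan [2]string) {
            defer wg.Done()
            for m := range in {
                fmt.Printf("worker %d: key=%s msg=%s\n", id, m[0], m[1])
            }
        }(i, chans[i])
    }
    go func() {
        for m := range msgs {
            h := fnv.New32a()
            h.Write([]byte(m[0]))
            chans[h.Sum32()%uint32(workers)] <- m
        }
        for _, ch := range chans {
            close(ch)
        }
    }()
    return &wg
}

func main() {
    msgs := make(chan [2]string, 4)
    wg := shardByKey(2, msgs)
    msgs <- [2]string{"conv:1", "hello"}
    msgs <- [2]string{"conv:1", "world"} // same key, same worker, same order
    close(msgs)
    wg.Wait()
}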
@@ -293,27 +285,22 @@ func (och *OnlineHistoryRedisConsumerHandler) handleMsg(ctx context.Context, key
case constant.ReadGroupChatType:
log.ZDebug(ctx, "group chat first create conversation", "conversationID",
conversationID)
userIDs, err := och.groupClient.GetGroupMemberUserIDs(ctx, msg.GroupID)
userIDs, err := och.groupRpcClient.GetGroupMemberIDs(ctx, msg.GroupID)
if err != nil {
log.ZWarn(ctx, "get group member ids error", err, "conversationID",
conversationID)
} else {
log.ZInfo(ctx, "GetGroupMemberIDs end")
if err := och.conversationClient.CreateGroupChatConversations(ctx, msg.GroupID, userIDs); err != nil {
if err := och.conversationRpcClient.GroupChatFirstCreateConversation(ctx,
msg.GroupID, userIDs); err != nil {
log.ZWarn(ctx, "single chat first create conversation error", err,
"conversationID", conversationID)
}
}
case constant.SingleChatType, constant.NotificationChatType:
req := &pbconv.CreateSingleChatConversationsReq{
RecvID: msg.RecvID,
SendID: msg.SendID,
ConversationID: conversationID,
ConversationType: msg.SessionType,
}
if err := och.conversationClient.CreateSingleChatConversations(ctx, req); err != nil {
if err := och.conversationRpcClient.SingleChatFirstCreateConversation(ctx, msg.RecvID,
msg.SendID, conversationID, msg.SessionType); err != nil {
log.ZWarn(ctx, "single chat or notification first create conversation error", err,
"conversationID", conversationID, "sessionType", msg.SessionType)
}
@@ -362,7 +349,7 @@ func (och *OnlineHistoryRedisConsumerHandler) handleNotification(ctx context.Con
func (och *OnlineHistoryRedisConsumerHandler) HandleUserHasReadSeqMessages(ctx context.Context) {
defer func() {
if r := recover(); r != nil {
log.ZPanic(ctx, "HandleUserHasReadSeqMessages Panic", errs.ErrPanic(r))
mw.PanicStackToLog(ctx, r)
}
}()
@@ -77,13 +77,27 @@ func (mc *OnlineHistoryMongoConsumerHandler) handleChatWs2Mongo(ctx context.Cont
for _, msg := range msgFromMQ.MsgData {
seqs = append(seqs, msg.Seq)
}
err = mc.msgTransferDatabase.DeleteMessagesFromCache(ctx, msgFromMQ.ConversationID, seqs)
if err != nil {
log.ZError(
ctx,
"remove cache msg from redis err",
err,
"msg",
msgFromMQ.MsgData,
"conversationID",
msgFromMQ.ConversationID,
)
}
}
func (*OnlineHistoryMongoConsumerHandler) Setup(_ sarama.ConsumerGroupSession) error { return nil }
func (*OnlineHistoryMongoConsumerHandler) Cleanup(_ sarama.ConsumerGroupSession) error { return nil }
func (mc *OnlineHistoryMongoConsumerHandler) ConsumeClaim(sess sarama.ConsumerGroupSession, claim sarama.ConsumerGroupClaim) error { // an instance in the consumer group
func (mc *OnlineHistoryMongoConsumerHandler) ConsumeClaim(
sess sarama.ConsumerGroupSession,
claim sarama.ConsumerGroupClaim,
) error { // an instance in the consumer group
log.ZDebug(context.Background(), "online new session msg come", "highWaterMarkOffset",
claim.HighWaterMarkOffset(), "topic", claim.Topic(), "partition", claim.Partition())
for msg := range claim.Messages() {
+2 -5
@@ -18,7 +18,6 @@ import (
"context"
"github.com/openimsdk/open-im-server/v3/internal/push/offlinepush/options"
"github.com/openimsdk/tools/log"
"sync/atomic"
)
func NewClient() *Dummy {
@@ -26,12 +25,10 @@ func NewClient() *Dummy {
}
type Dummy struct {
v atomic.Bool
}
func (d *Dummy) Push(ctx context.Context, userIDs []string, title, content string, opts *options.Opts) error {
if d.v.CompareAndSwap(false, true) {
log.ZWarn(ctx, "dummy push", nil, "ps", "the offline push is not configured. to configure it, please go to config/openim-push.yml")
}
log.ZDebug(ctx, "dummy push")
log.ZWarn(ctx, "Dummy push", nil, "ps", "The offline push is not configured. To configure it, please go to config/openim-push.yml.")
return nil
}
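The removed Dummy pusher used atomic.Bool.CompareAndSwap so the "offline push is not configured" warning fires exactly once per process while later pushes stay quiet. That log-once idiom as a standalone sketch:

package main

import (
    "fmt"
    "sync/atomic"
)

// dummyPusher warns exactly once: only the first caller wins the
// CompareAndSwap from false to true and prints the message.
type dummyPusher struct{ warned atomic.Bool }

func (d *dummyPusher) Push() {
    if d.warned.CompareAndSwap(false, true) {
        fmt.Println("offline push is not configured; see config/openim-push.yml")
    }
}

func main() {
    var d dummyPusher
    for i := 0; i < 3; i++ {
        d.Push() // warning printed only on the first call
    }
}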
+5 -3
@@ -14,7 +14,6 @@ import (
)
type pushServer struct {
pbpush.UnimplementedPushMsgServiceServer
database controller.PushDatabase
disCov discovery.SvcDiscoveryRegistry
offlinePusher offlinepush.OfflinePusher
@@ -32,8 +31,11 @@ type Config struct {
LocalCacheConfig config.LocalCache
Discovery config.Discovery
FcmConfigPath string
runTimeEnv string
}
func (p pushServer) PushMsg(ctx context.Context, req *pbpush.PushMsgReq) (*pbpush.PushMsgResp, error) {
//todo reserved Interface
return nil, nil
}
func (p pushServer) DelUserPushToken(ctx context.Context,
@@ -57,7 +59,7 @@ func Start(ctx context.Context, config *Config, client discovery.SvcDiscoveryReg
database := controller.NewPushDatabase(cacheModel, &config.KafkaConfig)
consumer, err := NewConsumerHandler(ctx, config, database, offlinePusher, rdb, client)
consumer, err := NewConsumerHandler(config, database, offlinePusher, rdb, client)
if err != nil {
return err
}
+34 -46
@@ -3,12 +3,11 @@ package push
import (
"context"
"encoding/json"
"math/rand"
"strconv"
"time"
"github.com/openimsdk/open-im-server/v3/pkg/rpcli"
"github.com/IBM/sarama"
"github.com/openimsdk/open-im-server/v3/internal/push/offlinepush"
"github.com/openimsdk/open-im-server/v3/internal/push/offlinepush/options"
@@ -17,6 +16,7 @@ import (
"github.com/openimsdk/open-im-server/v3/pkg/common/webhook"
"github.com/openimsdk/open-im-server/v3/pkg/msgprocessor"
"github.com/openimsdk/open-im-server/v3/pkg/rpccache"
"github.com/openimsdk/open-im-server/v3/pkg/rpcclient"
"github.com/openimsdk/open-im-server/v3/pkg/util/conversationutil"
"github.com/openimsdk/protocol/constant"
"github.com/openimsdk/protocol/msggateway"
@@ -41,15 +41,14 @@ type ConsumerHandler struct {
onlineCache *rpccache.OnlineCache
groupLocalCache *rpccache.GroupLocalCache
conversationLocalCache *rpccache.ConversationLocalCache
msgRpcClient rpcclient.MessageRpcClient
conversationRpcClient rpcclient.ConversationRpcClient
groupRpcClient rpcclient.GroupRpcClient
webhookClient *webhook.Client
config *Config
userClient *rpcli.UserClient
groupClient *rpcli.GroupClient
msgClient *rpcli.MsgClient
conversationClient *rpcli.ConversationClient
}
func NewConsumerHandler(ctx context.Context, config *Config, database controller.PushDatabase, offlinePusher offlinepush.OfflinePusher, rdb redis.UniversalClient,
func NewConsumerHandler(config *Config, database controller.PushDatabase, offlinePusher offlinepush.OfflinePusher, rdb redis.UniversalClient,
client discovery.SvcDiscoveryRegistry) (*ConsumerHandler, error) {
var consumerHandler ConsumerHandler
var err error
@@ -58,35 +57,20 @@ func NewConsumerHandler(ctx context.Context, config *Config, database controller
if err != nil {
return nil, err
}
userConn, err := client.GetConn(ctx, config.Share.RpcRegisterName.User)
if err != nil {
return nil, err
}
groupConn, err := client.GetConn(ctx, config.Share.RpcRegisterName.Group)
if err != nil {
return nil, err
}
msgConn, err := client.GetConn(ctx, config.Share.RpcRegisterName.Msg)
if err != nil {
return nil, err
}
conversationConn, err := client.GetConn(ctx, config.Share.RpcRegisterName.Conversation)
if err != nil {
return nil, err
}
consumerHandler.userClient = rpcli.NewUserClient(userConn)
consumerHandler.groupClient = rpcli.NewGroupClient(groupConn)
consumerHandler.msgClient = rpcli.NewMsgClient(msgConn)
consumerHandler.conversationClient = rpcli.NewConversationClient(conversationConn)
userRpcClient := rpcclient.NewUserRpcClient(client, config.Share.RpcRegisterName.User, config.Share.IMAdminUserID)
consumerHandler.offlinePusher = offlinePusher
consumerHandler.onlinePusher = NewOnlinePusher(client, config)
consumerHandler.groupLocalCache = rpccache.NewGroupLocalCache(consumerHandler.groupClient, &config.LocalCacheConfig, rdb)
consumerHandler.conversationLocalCache = rpccache.NewConversationLocalCache(consumerHandler.conversationClient, &config.LocalCacheConfig, rdb)
consumerHandler.groupRpcClient = rpcclient.NewGroupRpcClient(client, config.Share.RpcRegisterName.Group)
consumerHandler.groupLocalCache = rpccache.NewGroupLocalCache(consumerHandler.groupRpcClient, &config.LocalCacheConfig, rdb)
consumerHandler.msgRpcClient = rpcclient.NewMessageRpcClient(client, config.Share.RpcRegisterName.Msg)
consumerHandler.conversationRpcClient = rpcclient.NewConversationRpcClient(client, config.Share.RpcRegisterName.Conversation)
consumerHandler.conversationLocalCache = rpccache.NewConversationLocalCache(consumerHandler.conversationRpcClient, &config.LocalCacheConfig, rdb)
consumerHandler.webhookClient = webhook.NewWebhookClient(config.WebhooksConfig.URL)
consumerHandler.config = config
consumerHandler.pushDatabase = database
consumerHandler.onlineCache, err = rpccache.NewOnlineCache(consumerHandler.userClient, consumerHandler.groupLocalCache, rdb, config.RpcConfig.FullUserCache, nil)
consumerHandler.onlineCache, err = rpccache.NewOnlineCache(userRpcClient, consumerHandler.groupLocalCache, rdb, config.RpcConfig.FullUserCache, nil)
if err != nil {
return nil, err
}
@@ -153,24 +137,24 @@ func (c *ConsumerHandler) Push2User(ctx context.Context, userIDs []string, msg *
log.ZInfo(ctx, "Get msg from msg_transfer And push msg", "userIDs", userIDs, "msg", msg.String())
defer func(duration time.Time) {
t := time.Since(duration)
log.ZInfo(ctx, "Get msg from msg_transfer And push msg end", "msg", msg.String(), "time cost", t)
log.ZInfo(ctx, "Get msg from msg_transfer And push msg", "msg", msg.String(), "time cost", t)
}(time.Now())
if err := c.webhookBeforeOnlinePush(ctx, &c.config.WebhooksConfig.BeforeOnlinePush, userIDs, msg); err != nil {
return err
}
log.ZInfo(ctx, "webhookBeforeOnlinePush end")
wsResults, err := c.GetConnsAndOnlinePush(ctx, msg, userIDs)
if err != nil {
return err
}
log.ZDebug(ctx, "single and notification push result", "result", wsResults, "msg", msg, "push_to_userID", userIDs)
log.ZInfo(ctx, "single and notification push end")
log.ZInfo(ctx, "single and notification push result", "result", wsResults, "msg", msg, "push_to_userID", userIDs)
if !c.shouldPushOffline(ctx, msg) {
return nil
}
log.ZInfo(ctx, "pushOffline start")
log.ZInfo(ctx, "shouldPushOffline end")
for _, v := range wsResults {
// the message sender does not need an offline push
@@ -189,14 +173,14 @@ func (c *ConsumerHandler) Push2User(ctx context.Context, userIDs []string, msg *
if err = c.webhookBeforeOfflinePush(ctx, &c.config.WebhooksConfig.BeforeOfflinePush, needOfflinePushUserID, msg, &offlinePushUserID); err != nil {
return err
}
log.ZInfo(ctx, "webhookBeforeOfflinePush end")
if len(offlinePushUserID) > 0 {
needOfflinePushUserID = offlinePushUserID
}
err = c.offlinePushMsg(ctx, msg, needOfflinePushUserID)
if err != nil {
log.ZDebug(ctx, "offlinePushMsg failed", err, "needOfflinePushUserID", needOfflinePushUserID, "msg", msg)
log.ZWarn(ctx, "offlinePushMsg failed", err, "needOfflinePushUserID length", len(needOfflinePushUserID), "msg", msg)
log.ZWarn(ctx, "offlinePushMsg failed", err, "needOfflinePushUserID", needOfflinePushUserID, "msg", msg)
return nil
}
@@ -251,24 +235,26 @@ func (c *ConsumerHandler) Push2Group(ctx context.Context, groupID string, msg *s
&pushToUserIDs); err != nil {
return err
}
log.ZInfo(ctx, "webhookBeforeGroupOnlinePush end")
err = c.groupMessagesHandler(ctx, groupID, &pushToUserIDs, msg)
if err != nil {
return err
}
log.ZInfo(ctx, "groupMessagesHandler end")
wsResults, err := c.GetConnsAndOnlinePush(ctx, msg, pushToUserIDs)
if err != nil {
return err
}
log.ZDebug(ctx, "group push result", "result", wsResults, "msg", msg)
log.ZInfo(ctx, "online group push end")
log.ZInfo(ctx, "group push result", "result", wsResults, "msg", msg)
if !c.shouldPushOffline(ctx, msg) {
return nil
}
needOfflinePushUserIDs := c.onlinePusher.GetOnlinePushFailedUserIDs(ctx, msg, wsResults, &pushToUserIDs)
log.ZInfo(ctx, "GetOnlinePushFailedUserIDs end")
// filter out some users, e.g. those with do-not-disturb enabled or who don't need offline push
needOfflinePushUserIDs, err = c.filterGroupMessageOfflinePush(ctx, groupID, msg, needOfflinePushUserIDs)
if err != nil {
@@ -296,11 +282,9 @@ func (c *ConsumerHandler) asyncOfflinePush(ctx context.Context, needOfflinePushU
needOfflinePushUserIDs = offlinePushUserIDs
}
if err := c.pushDatabase.MsgToOfflinePushMQ(ctx, conversationutil.GenConversationUniqueKeyForSingle(msg.SendID, msg.RecvID), needOfflinePushUserIDs, msg); err != nil {
log.ZDebug(ctx, "Msg To OfflinePush MQ error", err, "needOfflinePushUserIDs",
log.ZError(ctx, "Msg To OfflinePush MQ error", err, "needOfflinePushUserIDs",
needOfflinePushUserIDs, "msg", msg)
log.ZWarn(ctx, "Msg To OfflinePush MQ error", err, "needOfflinePushUserIDs length",
len(needOfflinePushUserIDs), "msg", msg)
prommetrics.GroupChatMsgProcessFailedCounter.Inc()
prommetrics.SingleChatMsgProcessFailedCounter.Inc()
return
}
}
@@ -343,7 +327,7 @@ func (c *ConsumerHandler) groupMessagesHandler(ctx context.Context, groupID stri
ctx = mcontext.WithOpUserIDContext(ctx, c.config.Share.IMAdminUserID[0])
}
defer func(groupID string) {
if err := c.groupClient.DismissGroup(ctx, groupID, true); err != nil {
if err = c.groupRpcClient.DismissGroup(ctx, groupID); err != nil {
log.ZError(ctx, "DismissGroup Notification clear members", err, "groupID", groupID)
}
}(groupID)
@@ -369,7 +353,10 @@ func (c *ConsumerHandler) offlinePushMsg(ctx context.Context, msg *sdkws.MsgData
func (c *ConsumerHandler) filterGroupMessageOfflinePush(ctx context.Context, groupID string, msg *sdkws.MsgData,
offlinePushUserIDs []string) (userIDs []string, err error) {
needOfflinePushUserIDs, err := c.conversationClient.GetConversationOfflinePushUserIDs(ctx, conversationutil.GenGroupConversationID(groupID), offlinePushUserIDs)
// TODO: use the local cache to obtain the difference set through local comparison.
needOfflinePushUserIDs, err := c.conversationRpcClient.GetConversationOfflinePushUserIDs(
ctx, conversationutil.GenGroupConversationID(groupID), offlinePushUserIDs)
if err != nil {
return nil, err
}
@@ -423,11 +410,11 @@ func (c *ConsumerHandler) getOfflinePushInfos(msg *sdkws.MsgData) (title, conten
func (c *ConsumerHandler) DeleteMemberAndSetConversationSeq(ctx context.Context, groupID string, userIDs []string) error {
conversationID := msgprocessor.GetConversationIDBySessionType(constant.ReadGroupChatType, groupID)
maxSeq, err := c.msgClient.GetConversationMaxSeq(ctx, conversationID)
maxSeq, err := c.msgRpcClient.GetConversationMaxSeq(ctx, conversationID)
if err != nil {
return err
}
return c.conversationClient.SetConversationMaxSeq(ctx, conversationID, userIDs, maxSeq)
return c.conversationRpcClient.SetConversationMaxSeq(ctx, userIDs, conversationID, maxSeq)
}
func unmarshalNotificationElem(bytes []byte, t any) error {
@@ -435,5 +422,6 @@ func unmarshalNotificationElem(bytes []byte, t any) error {
if err := json.Unmarshal(bytes, &notification); err != nil {
return err
}
return json.Unmarshal([]byte(notification.Detail), t)
}
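unmarshalNotificationElem above decodes in two steps: first the notification envelope, then its Detail string as the typed payload. A standalone sketch of the same pattern (the envelope shape is assumed for illustration):

package main

import (
	"encoding/json"
	"fmt"
)

// Assumed envelope shape: Detail carries a JSON-encoded payload as a string.
type notificationElem struct {
	Detail string `json:"detail"`
}

func unmarshalNotificationElem(b []byte, t any) error {
	var n notificationElem
	if err := json.Unmarshal(b, &n); err != nil {
		return err
	}
	// second pass: decode the embedded JSON string into the caller's type
	return json.Unmarshal([]byte(n.Detail), t)
}

func main() {
	raw := []byte(`{"detail":"{\"groupID\":\"g1\"}"}`)
	var tips struct {
		GroupID string `json:"groupID"`
	}
	if err := unmarshalNotificationElem(raw, &tips); err != nil {
		panic(err)
	}
	fmt.Println(tips.GroupID) // g1
}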
+14 -20
View File
@@ -18,8 +18,6 @@ import (
"context"
"errors"
"github.com/openimsdk/open-im-server/v3/pkg/rpcli"
"github.com/openimsdk/open-im-server/v3/pkg/common/config"
redis2 "github.com/openimsdk/open-im-server/v3/pkg/common/storage/cache/redis"
"github.com/openimsdk/tools/db/redisutil"
@@ -30,6 +28,7 @@ import (
"github.com/openimsdk/open-im-server/v3/pkg/common/prommetrics"
"github.com/openimsdk/open-im-server/v3/pkg/common/servererrs"
"github.com/openimsdk/open-im-server/v3/pkg/common/storage/controller"
"github.com/openimsdk/open-im-server/v3/pkg/rpcclient"
pbauth "github.com/openimsdk/protocol/auth"
"github.com/openimsdk/protocol/constant"
"github.com/openimsdk/protocol/msggateway"
@@ -41,11 +40,10 @@ import (
)
type authServer struct {
pbauth.UnimplementedAuthServer
authDatabase controller.AuthDatabase
userRpcClient *rpcclient.UserRpcClient
RegisterCenter discovery.SvcDiscoveryRegistry
config *Config
userClient *rpcli.UserClient
}
type Config struct {
@@ -60,11 +58,9 @@ func Start(ctx context.Context, config *Config, client discovery.SvcDiscoveryReg
if err != nil {
return err
}
userConn, err := client.GetConn(ctx, config.Share.RpcRegisterName.User)
if err != nil {
return err
}
userRpcClient := rpcclient.NewUserRpcClient(client, config.Share.RpcRegisterName.User, config.Share.IMAdminUserID)
pbauth.RegisterAuthServer(server, &authServer{
userRpcClient: &userRpcClient,
RegisterCenter: client,
authDatabase: controller.NewAuthDatabase(
redis2.NewTokenCacheModel(rdb, config.RpcConfig.TokenPolicy.Expire),
@@ -73,8 +69,7 @@ func Start(ctx context.Context, config *Config, client discovery.SvcDiscoveryReg
config.Share.MultiLogin,
config.Share.IMAdminUserID,
),
config: config,
userClient: rpcli.NewUserClient(userConn),
config: config,
})
return nil
}
@@ -90,7 +85,7 @@ func (s *authServer) GetAdminToken(ctx context.Context, req *pbauth.GetAdminToke
}
if err := s.userClient.CheckUser(ctx, []string{req.UserID}); err != nil {
if _, err := s.userRpcClient.GetUserInfo(ctx, req.UserID); err != nil {
return nil, err
}
@@ -119,13 +114,9 @@ func (s *authServer) GetUserToken(ctx context.Context, req *pbauth.GetUserTokenR
if authverify.IsManagerUserID(req.UserID, s.config.Share.IMAdminUserID) {
return nil, errs.ErrNoPermission.WrapMsg("don't get Admin token")
}
user, err := s.userClient.GetUserInfo(ctx, req.UserID)
if err != nil {
if _, err := s.userRpcClient.GetUserInfo(ctx, req.UserID); err != nil {
return nil, err
}
if user.AppMangerLevel >= constant.AppNotificationAdmin {
return nil, errs.ErrArgs.WrapMsg("app account can't get token")
}
token, err := s.authDatabase.CreateToken(ctx, req.UserID, int(req.PlatformID))
if err != nil {
return nil, err
@@ -138,7 +129,7 @@ func (s *authServer) GetUserToken(ctx context.Context, req *pbauth.GetUserTokenR
func (s *authServer) parseToken(ctx context.Context, tokensString string) (claims *tokenverify.Claims, err error) {
claims, err = tokenverify.GetClaimFromToken(tokensString, authverify.Secret(s.config.Share.Secret))
if err != nil {
return nil, err
return nil, errs.Wrap(err)
}
isAdmin := authverify.IsManagerUserID(claims.UserID, s.config.Share.IMAdminUserID)
if isAdmin {
@@ -164,7 +155,10 @@ func (s *authServer) parseToken(ctx context.Context, tokensString string) (claim
return nil, servererrs.ErrTokenNotExist.Wrap()
}
func (s *authServer) ParseToken(ctx context.Context, req *pbauth.ParseTokenReq) (resp *pbauth.ParseTokenResp, err error) {
func (s *authServer) ParseToken(
ctx context.Context,
req *pbauth.ParseTokenReq,
) (resp *pbauth.ParseTokenResp, err error) {
resp = &pbauth.ParseTokenResp{}
claims, err := s.parseToken(ctx, req.Token)
if err != nil {
@@ -202,7 +196,7 @@ func (s *authServer) forceKickOff(ctx context.Context, userID string, platformID
}
m, err := s.authDatabase.GetTokensWithoutError(ctx, userID, int(platformID))
if err != nil && !errors.Is(err, redis.Nil) {
if err != nil && errors.Is(err, redis.Nil) {
return err
}
for k := range m {
@@ -220,7 +214,7 @@ func (s *authServer) forceKickOff(ctx context.Context, userID string, platformID
func (s *authServer) InvalidateToken(ctx context.Context, req *pbauth.InvalidateTokenReq) (*pbauth.InvalidateTokenResp, error) {
m, err := s.authDatabase.GetTokensWithoutError(ctx, req.UserID, int(req.PlatformID))
if err != nil && !errors.Is(err, redis.Nil) {
if err != nil && errors.Is(err, redis.Nil) {
return nil, err
}
if m == nil {
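Both polarities of the redis.Nil check appear in these auth hunks. The usual Go idiom treats redis.Nil as "key absent" rather than a failure, so only other errors propagate; a self-contained sketch, assuming the go-redis v9 client this code appears to use:

package main

import (
	"errors"
	"fmt"

	"github.com/redis/go-redis/v9"
)

// getTokens wraps a lookup whose "key missing" case surfaces as redis.Nil.
func getTokens(lookup func() (map[string]int32, error)) (map[string]int32, error) {
	m, err := lookup()
	// redis.Nil means the key is absent: an empty result, not a failure.
	if err != nil && !errors.Is(err, redis.Nil) {
		return nil, err
	}
	return m, nil
}

func main() {
	m, err := getTokens(func() (map[string]int32, error) { return nil, redis.Nil })
	fmt.Println(m, err) // map[] <nil> — absence is not an error

	_, err = getTokens(func() (map[string]int32, error) { return nil, errors.New("boom") })
	fmt.Println(err) // boom — real errors still propagate
}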
+31 -106
View File
@@ -19,20 +19,18 @@ import (
"sort"
"time"
"github.com/openimsdk/open-im-server/v3/pkg/rpcli"
"github.com/openimsdk/open-im-server/v3/pkg/common/config"
"github.com/openimsdk/open-im-server/v3/pkg/common/storage/cache/redis"
"github.com/openimsdk/open-im-server/v3/pkg/common/storage/database/mgo"
"github.com/openimsdk/open-im-server/v3/pkg/common/storage/model"
dbModel "github.com/openimsdk/open-im-server/v3/pkg/common/storage/model"
"github.com/openimsdk/open-im-server/v3/pkg/localcache"
"github.com/openimsdk/open-im-server/v3/pkg/msgprocessor"
"github.com/openimsdk/tools/db/redisutil"
"github.com/openimsdk/open-im-server/v3/pkg/common/convert"
"github.com/openimsdk/open-im-server/v3/pkg/common/servererrs"
"github.com/openimsdk/open-im-server/v3/pkg/common/storage/controller"
"github.com/openimsdk/open-im-server/v3/pkg/rpcclient"
"github.com/openimsdk/protocol/constant"
pbconversation "github.com/openimsdk/protocol/conversation"
"github.com/openimsdk/protocol/sdkws"
@@ -45,15 +43,13 @@ import (
)
type conversationServer struct {
pbconversation.UnimplementedConversationServer
msgRpcClient *rpcclient.MessageRpcClient
user *rpcclient.UserRpcClient
groupRpcClient *rpcclient.GroupRpcClient
conversationDatabase controller.ConversationDatabase
conversationNotificationSender *ConversationNotificationSender
config *Config
userClient *rpcli.UserClient
msgClient *rpcli.MsgClient
groupClient *rpcli.GroupClient
}
type Config struct {
@@ -79,27 +75,17 @@ func Start(ctx context.Context, config *Config, client discovery.SvcDiscoveryReg
if err != nil {
return err
}
userConn, err := client.GetConn(ctx, config.Share.RpcRegisterName.User)
if err != nil {
return err
}
groupConn, err := client.GetConn(ctx, config.Share.RpcRegisterName.Group)
if err != nil {
return err
}
msgConn, err := client.GetConn(ctx, config.Share.RpcRegisterName.Msg)
if err != nil {
return err
}
msgClient := rpcli.NewMsgClient(msgConn)
groupRpcClient := rpcclient.NewGroupRpcClient(client, config.Share.RpcRegisterName.Group)
msgRpcClient := rpcclient.NewMessageRpcClient(client, config.Share.RpcRegisterName.Msg)
userRpcClient := rpcclient.NewUserRpcClient(client, config.Share.RpcRegisterName.User, config.Share.IMAdminUserID)
localcache.InitLocalCache(&config.LocalCacheConfig)
pbconversation.RegisterConversationServer(server, &conversationServer{
conversationNotificationSender: NewConversationNotificationSender(&config.NotificationConfig, msgClient),
msgRpcClient: &msgRpcClient,
user: &userRpcClient,
conversationNotificationSender: NewConversationNotificationSender(&config.NotificationConfig, &msgRpcClient),
groupRpcClient: &groupRpcClient,
conversationDatabase: controller.NewConversationDatabase(conversationDB,
redis.NewConversationRedis(rdb, &config.LocalCacheConfig, redis.GetRocksCacheOptions(), conversationDB), mgocli.GetTx()),
userClient: rpcli.NewUserClient(userConn),
groupClient: rpcli.NewGroupClient(groupConn),
msgClient: msgClient,
})
return nil
}
@@ -136,12 +122,13 @@ func (c *conversationServer) GetSortedConversationList(ctx context.Context, req
if len(conversations) == 0 {
return nil, errs.ErrRecordNotFound.Wrap()
}
maxSeqs, err := c.msgClient.GetMaxSeqs(ctx, conversationIDs)
maxSeqs, err := c.msgRpcClient.GetMaxSeqs(ctx, conversationIDs)
if err != nil {
return nil, err
}
chatLogs, err := c.msgClient.GetMsgByConversationIDs(ctx, conversationIDs, maxSeqs)
chatLogs, err := c.msgRpcClient.GetMsgByConversationIDs(ctx, conversationIDs, maxSeqs)
if err != nil {
return nil, err
}
@@ -151,7 +138,7 @@ func (c *conversationServer) GetSortedConversationList(ctx context.Context, req
return nil, err
}
hasReadSeqs, err := c.msgClient.GetHasReadSeqs(ctx, conversationIDs, req.UserID)
hasReadSeqs, err := c.msgRpcClient.GetHasReadSeqs(ctx, req.UserID, conversationIDs)
if err != nil {
return nil, err
}
@@ -240,7 +227,7 @@ func (c *conversationServer) SetConversations(ctx context.Context, req *pbconver
}
if req.Conversation.ConversationType == constant.WriteGroupChatType {
groupInfo, err := c.groupClient.GetGroupInfo(ctx, req.Conversation.GroupID)
groupInfo, err := c.groupRpcClient.GetGroupInfo(ctx, req.Conversation.GroupID)
if err != nil {
return nil, err
}
@@ -364,15 +351,7 @@ func (c *conversationServer) SetConversations(ctx context.Context, req *pbconver
needUpdateUsersList = append(needUpdateUsersList, userID)
}
}
if len(m) != 0 && len(needUpdateUsersList) != 0 {
if err := c.conversationDatabase.SetUsersConversationFieldTx(ctx, needUpdateUsersList, &conversation, m); err != nil {
return nil, err
}
for _, v := range needUpdateUsersList {
c.conversationNotificationSender.ConversationChangeNotification(ctx, v, []string{req.Conversation.ConversationID})
}
}
if req.Conversation.IsPrivateChat != nil && req.Conversation.ConversationType != constant.ReadGroupChatType {
var conversations []*dbModel.Conversation
for _, ownerUserID := range req.UserIDs {
@@ -390,6 +369,16 @@ func (c *conversationServer) SetConversations(ctx context.Context, req *pbconver
c.conversationNotificationSender.ConversationSetPrivateNotification(ctx, userID, req.Conversation.UserID,
req.Conversation.IsPrivateChat.Value, req.Conversation.ConversationID)
}
} else {
if len(m) != 0 && len(needUpdateUsersList) != 0 {
if err := c.conversationDatabase.SetUsersConversationFieldTx(ctx, needUpdateUsersList, &conversation, m); err != nil {
return nil, err
}
for _, v := range needUpdateUsersList {
c.conversationNotificationSender.ConversationChangeNotification(ctx, v, []string{req.Conversation.ConversationID})
}
}
}
return &pbconversation.SetConversationsResp{}, nil
@@ -443,38 +432,22 @@ func (c *conversationServer) CreateGroupChatConversations(ctx context.Context, r
if err != nil {
return nil, err
}
conversationID := msgprocessor.GetConversationIDBySessionType(constant.ReadGroupChatType, req.GroupID)
if err := c.msgClient.SetUserConversationMaxSeq(ctx, conversationID, req.UserIDs, 0); err != nil {
return nil, err
}
return &pbconversation.CreateGroupChatConversationsResp{}, nil
}
func (c *conversationServer) SetConversationMaxSeq(ctx context.Context, req *pbconversation.SetConversationMaxSeqReq) (*pbconversation.SetConversationMaxSeqResp, error) {
if err := c.msgClient.SetUserConversationMaxSeq(ctx, req.ConversationID, req.OwnerUserID, req.MaxSeq); err != nil {
return nil, err
}
if err := c.conversationDatabase.UpdateUsersConversationField(ctx, req.OwnerUserID, req.ConversationID,
map[string]any{"max_seq": req.MaxSeq}); err != nil {
return nil, err
}
for _, userID := range req.OwnerUserID {
c.conversationNotificationSender.ConversationChangeNotification(ctx, userID, []string{req.ConversationID})
}
return &pbconversation.SetConversationMaxSeqResp{}, nil
}
func (c *conversationServer) SetConversationMinSeq(ctx context.Context, req *pbconversation.SetConversationMinSeqReq) (*pbconversation.SetConversationMinSeqResp, error) {
if err := c.msgClient.SetUserConversationMin(ctx, req.ConversationID, req.OwnerUserID, req.MinSeq); err != nil {
return nil, err
}
if err := c.conversationDatabase.UpdateUsersConversationField(ctx, req.OwnerUserID, req.ConversationID,
map[string]any{"min_seq": req.MinSeq}); err != nil {
return nil, err
}
for _, userID := range req.OwnerUserID {
c.conversationNotificationSender.ConversationChangeNotification(ctx, userID, []string{req.ConversationID})
}
return &pbconversation.SetConversationMinSeqResp{}, nil
}
@@ -575,7 +548,7 @@ func (c *conversationServer) getConversationInfo(
}
}
if len(sendIDs) != 0 {
sendInfos, err := c.userClient.GetUsersInfo(ctx, sendIDs)
sendInfos, err := c.user.GetUsersInfo(ctx, sendIDs)
if err != nil {
return nil, err
}
@@ -584,7 +557,7 @@ func (c *conversationServer) getConversationInfo(
}
}
if len(groupIDs) != 0 {
groupInfos, err := c.groupClient.GetGroupsInfo(ctx, groupIDs)
groupInfos, err := c.groupRpcClient.GetGroupInfos(ctx, groupIDs, false)
if err != nil {
return nil, err
}
@@ -696,7 +669,7 @@ func (c *conversationServer) GetOwnerConversation(ctx context.Context, req *pbco
}, nil
}
func (c *conversationServer) GetConversationsNeedClearMsg(ctx context.Context, _ *pbconversation.GetConversationsNeedClearMsgReq) (*pbconversation.GetConversationsNeedClearMsgResp, error) {
func (c *conversationServer) GetConversationsNeedDestructMsgs(ctx context.Context, _ *pbconversation.GetConversationsNeedDestructMsgsReq) (*pbconversation.GetConversationsNeedDestructMsgsResp, error) {
num, err := c.conversationDatabase.GetAllConversationIDsNumber(ctx)
if err != nil {
log.ZError(ctx, "GetAllConversationIDsNumber failed", err)
@@ -720,7 +693,7 @@ func (c *conversationServer) GetConversationsNeedClearMsg(ctx context.Context, _
conversationIDs, err := c.conversationDatabase.PageConversationIDs(ctx, pagination)
if err != nil {
log.ZError(ctx, "PageConversationIDs failed", err, "pageNumber", pageNumber)
// log.ZError(ctx, "PageConversationIDs failed", err, "pageNumber", pageNumber)
continue
}
@@ -743,7 +716,7 @@ func (c *conversationServer) GetConversationsNeedClearMsg(ctx context.Context, _
}
}
return &pbconversation.GetConversationsNeedClearMsgResp{Conversations: convert.ConversationsDB2Pb(temp)}, nil
return &pbconversation.GetConversationsNeedDestructMsgsResp{Conversations: convert.ConversationsDB2Pb(temp)}, nil
}
func (c *conversationServer) GetNotNotifyConversationIDs(ctx context.Context, req *pbconversation.GetNotNotifyConversationIDsReq) (*pbconversation.GetNotNotifyConversationIDsResp, error) {
@@ -761,51 +734,3 @@ func (c *conversationServer) GetPinnedConversationIDs(ctx context.Context, req *
}
return &pbconversation.GetPinnedConversationIDsResp{ConversationIDs: conversationIDs}, nil
}
func (c *conversationServer) ClearUserConversationMsg(ctx context.Context, req *pbconversation.ClearUserConversationMsgReq) (*pbconversation.ClearUserConversationMsgResp, error) {
conversations, err := c.conversationDatabase.FindRandConversation(ctx, req.Timestamp, int(req.Limit))
if err != nil {
return nil, err
}
latestMsgDestructTime := time.UnixMilli(req.Timestamp)
for i, conversation := range conversations {
if !conversation.IsMsgDestruct || conversation.MsgDestructTime == 0 {
continue
}
seq, err := c.msgClient.GetLastMessageSeqByTime(ctx, conversation.ConversationID, req.Timestamp-(conversation.MsgDestructTime*1000))
if err != nil {
return nil, err
}
if seq <= 0 {
log.ZDebug(ctx, "ClearUserConversationMsg GetLastMessageSeqByTime seq <= 0", "index", i, "conversationID", conversation.ConversationID, "ownerUserID", conversation.OwnerUserID, "msgDestructTime", conversation.MsgDestructTime, "seq", seq)
if err := c.setConversationMinSeqAndLatestMsgDestructTime(ctx, conversation.ConversationID, conversation.OwnerUserID, -1, latestMsgDestructTime); err != nil {
return nil, err
}
continue
}
seq++
if err := c.setConversationMinSeqAndLatestMsgDestructTime(ctx, conversation.ConversationID, conversation.OwnerUserID, seq, latestMsgDestructTime); err != nil {
return nil, err
}
log.ZDebug(ctx, "ClearUserConversationMsg set min seq", "index", i, "conversationID", conversation.ConversationID, "ownerUserID", conversation.OwnerUserID, "seq", seq, "msgDestructTime", conversation.MsgDestructTime)
}
return &pbconversation.ClearUserConversationMsgResp{Count: int32(len(conversations))}, nil
}
func (c *conversationServer) setConversationMinSeqAndLatestMsgDestructTime(ctx context.Context, conversationID string, ownerUserID string, minSeq int64, latestMsgDestructTime time.Time) error {
update := map[string]any{
"latest_msg_destruct_time": latestMsgDestructTime,
}
if minSeq >= 0 {
if err := c.msgClient.SetUserConversationMin(ctx, conversationID, []string{ownerUserID}, minSeq); err != nil {
return err
}
update["min_seq"] = minSeq
}
if err := c.conversationDatabase.UpdateUsersConversationField(ctx, []string{ownerUserID}, conversationID, update); err != nil {
return err
}
c.conversationNotificationSender.ConversationChangeNotification(ctx, ownerUserID, []string{conversationID})
return nil
}
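ClearUserConversationMsg above advances a conversation's min seq to one past the last destructible message, and passes -1 when only latest_msg_destruct_time should move. A compact sketch of that seq arithmetic (helper names are hypothetical):

package main

import (
	"fmt"
	"time"
)

// Messages at or before now - msgDestructTime are destructible; the new min
// seq is lastSeq+1, so reading resumes at the first surviving message.
func destructCutoff(nowMilli, msgDestructTimeSec int64) time.Time {
	return time.UnixMilli(nowMilli - msgDestructTimeSec*1000)
}

// nextMinSeq returns the min seq to set, or -1 when there is nothing to clear
// (mirroring the -1 sentinel passed to setConversationMinSeqAndLatestMsgDestructTime).
func nextMinSeq(lastSeqBeforeCutoff int64) int64 {
	if lastSeqBeforeCutoff <= 0 {
		return -1
	}
	return lastSeqBeforeCutoff + 1
}

func main() {
	fmt.Println(destructCutoff(1_700_000_086_400, 86_400)) // 24h window
	fmt.Println(nextMinSeq(0))  // -1: nothing destructible
	fmt.Println(nextMinSeq(41)) // 42: seqs 1..41 are cleared
}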
+3 -7
View File
@@ -16,11 +16,9 @@ package conversation
import (
"context"
"github.com/openimsdk/open-im-server/v3/pkg/rpcli"
"github.com/openimsdk/protocol/msg"
"github.com/openimsdk/open-im-server/v3/pkg/common/config"
"github.com/openimsdk/open-im-server/v3/pkg/notification"
"github.com/openimsdk/open-im-server/v3/pkg/rpcclient"
"github.com/openimsdk/protocol/constant"
"github.com/openimsdk/protocol/sdkws"
)
@@ -29,10 +27,8 @@ type ConversationNotificationSender struct {
*rpcclient.NotificationSender
}
func NewConversationNotificationSender(conf *config.Notification, msgClient *rpcli.MsgClient) *ConversationNotificationSender {
return &ConversationNotificationSender{rpcclient.NewNotificationSender(conf, rpcclient.WithRpcClient(func(ctx context.Context, req *msg.SendMsgReq) (*msg.SendMsgResp, error) {
return msgClient.SendMsg(ctx, req)
}))}
func NewConversationNotificationSender(conf *config.Notification, msgRpcClient *rpcclient.MessageRpcClient) *ConversationNotificationSender {
return &ConversationNotificationSender{rpcclient.NewNotificationSender(conf, rpcclient.WithRpcClient(msgRpcClient))}
}
// SetPrivate invoke.
+1 -1
View File
@@ -110,7 +110,7 @@ func UpdateGroupMemberMap(req *pbgroup.SetGroupMemberInfo) map[string]any {
m["nickname"] = req.Nickname.Value
}
if req.FaceURL != nil {
m["face_url"] = req.FaceURL.Value
m["user_group_face_url"] = req.FaceURL.Value
}
if req.RoleLevel != nil {
m["role_level"] = req.RoleLevel.Value
+64 -50
View File
@@ -17,7 +17,6 @@ package group
import (
"context"
"fmt"
"github.com/openimsdk/open-im-server/v3/pkg/rpcli"
"math/big"
"math/rand"
"strconv"
@@ -37,7 +36,9 @@ import (
"github.com/openimsdk/open-im-server/v3/pkg/common/servererrs"
"github.com/openimsdk/open-im-server/v3/pkg/common/storage/controller"
"github.com/openimsdk/open-im-server/v3/pkg/msgprocessor"
"github.com/openimsdk/open-im-server/v3/pkg/notification/grouphash"
"github.com/openimsdk/open-im-server/v3/pkg/rpcclient"
"github.com/openimsdk/open-im-server/v3/pkg/rpcclient/grouphash"
"github.com/openimsdk/open-im-server/v3/pkg/rpcclient/notification"
"github.com/openimsdk/protocol/constant"
pbconversation "github.com/openimsdk/protocol/conversation"
pbgroup "github.com/openimsdk/protocol/group"
@@ -56,14 +57,13 @@ import (
)
type groupServer struct {
pbgroup.UnimplementedGroupServer
db controller.GroupDatabase
notification *NotificationSender
config *Config
webhookClient *webhook.Client
userClient *rpcli.UserClient
msgClient *rpcli.MsgClient
conversationClient *rpcli.ConversationClient
db controller.GroupDatabase
user rpcclient.UserRpcClient
notification *GroupNotificationSender
conversationRpcClient rpcclient.ConversationRpcClient
msgRpcClient rpcclient.MessageRpcClient
config *Config
webhookClient *webhook.Client
}
type Config struct {
@@ -98,33 +98,32 @@ func Start(ctx context.Context, config *Config, client discovery.SvcDiscoveryReg
if err != nil {
return err
}
//userRpcClient := rpcclient.NewUserRpcClient(client, config.Share.RpcRegisterName.User, config.Share.IMAdminUserID)
//msgRpcClient := rpcclient.NewMessageRpcClient(client, config.Share.RpcRegisterName.Msg)
//conversationRpcClient := rpcclient.NewConversationRpcClient(client, config.Share.RpcRegisterName.Conversation)
userConn, err := client.GetConn(ctx, config.Share.RpcRegisterName.User)
if err != nil {
return err
}
msgConn, err := client.GetConn(ctx, config.Share.RpcRegisterName.Msg)
if err != nil {
return err
}
conversationConn, err := client.GetConn(ctx, config.Share.RpcRegisterName.Conversation)
if err != nil {
return err
}
gs := groupServer{
config: config,
webhookClient: webhook.NewWebhookClient(config.WebhooksConfig.URL),
userClient: rpcli.NewUserClient(userConn),
msgClient: rpcli.NewMsgClient(msgConn),
conversationClient: rpcli.NewConversationClient(conversationConn),
}
gs.db = controller.NewGroupDatabase(rdb, &config.LocalCacheConfig, groupDB, groupMemberDB, groupRequestDB, mgocli.GetTx(), grouphash.NewGroupHashFromGroupServer(&gs))
gs.notification = NewNotificationSender(gs.db, config, gs.userClient, gs.msgClient, gs.conversationClient)
userRpcClient := rpcclient.NewUserRpcClient(client, config.Share.RpcRegisterName.User, config.Share.IMAdminUserID)
msgRpcClient := rpcclient.NewMessageRpcClient(client, config.Share.RpcRegisterName.Msg)
conversationRpcClient := rpcclient.NewConversationRpcClient(client, config.Share.RpcRegisterName.Conversation)
var gs groupServer
database := controller.NewGroupDatabase(rdb, &config.LocalCacheConfig, groupDB, groupMemberDB, groupRequestDB, mgocli.GetTx(), grouphash.NewGroupHashFromGroupServer(&gs))
gs.db = database
gs.user = userRpcClient
gs.notification = NewGroupNotificationSender(
database,
&msgRpcClient,
&userRpcClient,
&conversationRpcClient,
config,
func(ctx context.Context, userIDs []string) ([]notification.CommonUser, error) {
users, err := userRpcClient.GetUsersInfo(ctx, userIDs)
if err != nil {
return nil, err
}
return datautil.Slice(users, func(e *sdkws.UserInfo) notification.CommonUser { return e }), nil
},
)
localcache.InitLocalCache(&config.LocalCacheConfig)
gs.conversationRpcClient = conversationRpcClient
gs.msgRpcClient = msgRpcClient
gs.config = config
gs.webhookClient = webhook.NewWebhookClient(config.WebhooksConfig.URL)
pbgroup.RegisterGroupServer(server, &gs)
return nil
}
@@ -168,6 +167,19 @@ func (g *groupServer) CheckGroupAdmin(ctx context.Context, groupID string) error
return nil
}
func (g *groupServer) GetPublicUserInfoMap(ctx context.Context, userIDs []string) (map[string]*sdkws.PublicUserInfo, error) {
if len(userIDs) == 0 {
return map[string]*sdkws.PublicUserInfo{}, nil
}
users, err := g.user.GetPublicUserInfos(ctx, userIDs)
if err != nil {
return nil, err
}
return datautil.SliceToMapAny(users, func(e *sdkws.PublicUserInfo) (string, *sdkws.PublicUserInfo) {
return e.UserID, e
}), nil
}
func (g *groupServer) IsNotFound(err error) bool {
return errs.ErrRecordNotFound.Is(specialerror.ErrCode(errs.Unwrap(err)))
}
@@ -209,6 +221,7 @@ func (g *groupServer) CreateGroup(ctx context.Context, req *pbgroup.CreateGroupR
return nil, errs.ErrArgs.WrapMsg("no group owner")
}
if err := authverify.CheckAccessV3(ctx, req.OwnerUserID, g.config.Share.IMAdminUserID); err != nil {
return nil, err
}
userIDs := append(append(req.MemberUserIDs, req.AdminUserIDs...), req.OwnerUserID)
@@ -221,7 +234,7 @@ func (g *groupServer) CreateGroup(ctx context.Context, req *pbgroup.CreateGroupR
return nil, errs.ErrArgs.WrapMsg("group member repeated")
}
userMap, err := g.userClient.GetUsersInfoMap(ctx, userIDs)
userMap, err := g.user.GetUsersInfoMap(ctx, userIDs)
if err != nil {
return nil, err
}
@@ -372,7 +385,7 @@ func (g *groupServer) InviteUserToGroup(ctx context.Context, req *pbgroup.Invite
return nil, servererrs.ErrDismissedAlready.WrapMsg("group dismissed checking group status found it dismissed")
}
userMap, err := g.userClient.GetUsersInfoMap(ctx, req.InvitedUserIDs)
userMap, err := g.user.GetUsersInfoMap(ctx, req.InvitedUserIDs)
if err != nil {
return nil, err
}
@@ -683,7 +696,7 @@ func (g *groupServer) GetGroupApplicationList(ctx context.Context, req *pbgroup.
userIDs = append(userIDs, gr.UserID)
}
userIDs = datautil.Distinct(userIDs)
userMap, err := g.userClient.GetUsersInfoMap(ctx, userIDs)
userMap, err := g.user.GetPublicUserInfoMap(ctx, userIDs)
if err != nil {
return nil, err
}
@@ -795,7 +808,7 @@ func (g *groupServer) GroupApplicationResponse(ctx context.Context, req *pbgroup
} else if !g.IsNotFound(err) {
return nil, err
}
if err := g.userClient.CheckUser(ctx, []string{req.FromUserID}); err != nil {
if _, err := g.user.GetPublicUserInfo(ctx, req.FromUserID); err != nil {
return nil, err
}
var member *model.GroupMember
@@ -839,7 +852,7 @@ func (g *groupServer) GroupApplicationResponse(ctx context.Context, req *pbgroup
}
func (g *groupServer) JoinGroup(ctx context.Context, req *pbgroup.JoinGroupReq) (*pbgroup.JoinGroupResp, error) {
user, err := g.userClient.GetUserInfo(ctx, req.InviterUserID)
user, err := g.user.GetUserInfo(ctx, req.InviterUserID)
if err != nil {
return nil, err
}
@@ -945,12 +958,12 @@ func (g *groupServer) QuitGroup(ctx context.Context, req *pbgroup.QuitGroupReq)
}
func (g *groupServer) deleteMemberAndSetConversationSeq(ctx context.Context, groupID string, userIDs []string) error {
conversationID := msgprocessor.GetConversationIDBySessionType(constant.ReadGroupChatType, groupID)
maxSeq, err := g.msgClient.GetConversationMaxSeq(ctx, conversationID)
conevrsationID := msgprocessor.GetConversationIDBySessionType(constant.ReadGroupChatType, groupID)
maxSeq, err := g.msgRpcClient.GetConversationMaxSeq(ctx, conevrsationID)
if err != nil {
return err
}
return g.conversationClient.SetConversationMaxSeq(ctx, conversationID, userIDs, maxSeq)
return g.conversationRpcClient.SetConversationMaxSeq(ctx, userIDs, conevrsationID, maxSeq)
}
func (g *groupServer) SetGroupInfo(ctx context.Context, req *pbgroup.SetGroupInfoReq) (*pbgroup.SetGroupInfoResp, error) {
@@ -1026,7 +1039,7 @@ func (g *groupServer) SetGroupInfo(ctx context.Context, req *pbgroup.SetGroupInf
return
}
conversation.GroupAtType = &wrapperspb.Int32Value{Value: constant.GroupNotification}
if err := g.conversationClient.SetConversations(ctx, resp.UserIDs, conversation); err != nil {
if err := g.conversationRpcClient.SetConversations(ctx, resp.UserIDs, conversation); err != nil {
log.ZWarn(ctx, "SetConversations", err, "UserIDs", resp.UserIDs, "conversation", conversation)
}
}()
@@ -1139,7 +1152,8 @@ func (g *groupServer) SetGroupInfoEx(ctx context.Context, req *pbgroup.SetGroupI
}
conversation.GroupAtType = &wrapperspb.Int32Value{Value: constant.GroupNotification}
if err := g.conversationClient.SetConversations(ctx, resp.UserIDs, conversation); err != nil {
if err := g.conversationRpcClient.SetConversations(ctx, resp.UserIDs, conversation); err != nil {
log.ZWarn(ctx, "SetConversations", err, "UserIDs", resp.UserIDs, "conversation", conversation)
}
}()
@@ -1291,7 +1305,7 @@ func (g *groupServer) GetGroupMembersCMS(ctx context.Context, req *pbgroup.GetGr
}
func (g *groupServer) GetUserReqApplicationList(ctx context.Context, req *pbgroup.GetUserReqApplicationListReq) (*pbgroup.GetUserReqApplicationListResp, error) {
user, err := g.userClient.GetUserInfo(ctx, req.UserID)
user, err := g.user.GetPublicUserInfo(ctx, req.UserID)
if err != nil {
return nil, err
}
@@ -1747,7 +1761,7 @@ func (g *groupServer) GetGroupUsersReqApplicationList(ctx context.Context, req *
return nil, servererrs.ErrGroupIDNotFound.WrapMsg(strings.Join(ids, ","))
}
userMap, err := g.userClient.GetUsersInfoMap(ctx, req.UserIDs)
userMap, err := g.user.GetPublicUserInfoMap(ctx, req.UserIDs)
if err != nil {
return nil, err
}
@@ -1778,7 +1792,7 @@ func (g *groupServer) GetGroupUsersReqApplicationList(ctx context.Context, req *
ownerUserID = owner.UserID
}
var userInfo *sdkws.UserInfo
var userInfo *sdkws.PublicUserInfo
if user, ok := userMap[e.UserID]; ok {
userInfo = user
}
@@ -1824,7 +1838,7 @@ func (g *groupServer) GetSpecifiedUserGroupRequestInfo(ctx context.Context, req
return nil, err
}
userInfos, err := g.userClient.GetUsersInfo(ctx, []string{req.UserID})
userInfos, err := g.user.GetPublicUserInfos(ctx, []string{req.UserID})
if err != nil {
return nil, err
}
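Several hunks in this file swap GetUsersInfoMap for GetPublicUserInfoMap; both fetch a slice of users and index it by ID, as GetPublicUserInfoMap above does via datautil.SliceToMapAny. A generic sketch of that slice-to-map step (types are stand-ins):

package main

import "fmt"

type PublicUserInfo struct {
	UserID   string
	Nickname string
}

// sliceToMap mirrors the datautil helper: index a slice by a key function.
func sliceToMap[T any, K comparable](s []T, key func(T) K) map[K]T {
	m := make(map[K]T, len(s))
	for _, v := range s {
		m[key(v)] = v
	}
	return m
}

func main() {
	users := []*PublicUserInfo{{UserID: "u1", Nickname: "a"}, {UserID: "u2", Nickname: "b"}}
	byID := sliceToMap(users, func(u *PublicUserInfo) string { return u.UserID })
	fmt.Println(byID["u2"].Nickname) // b
}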
+75 -69
View File
@@ -18,10 +18,6 @@ import (
"context"
"errors"
"fmt"
"time"
"github.com/openimsdk/open-im-server/v3/pkg/rpcli"
"github.com/openimsdk/open-im-server/v3/pkg/authverify"
"github.com/openimsdk/open-im-server/v3/pkg/common/convert"
"github.com/openimsdk/open-im-server/v3/pkg/common/servererrs"
@@ -30,8 +26,8 @@ import (
"github.com/openimsdk/open-im-server/v3/pkg/common/storage/model"
"github.com/openimsdk/open-im-server/v3/pkg/common/storage/versionctx"
"github.com/openimsdk/open-im-server/v3/pkg/msgprocessor"
"github.com/openimsdk/open-im-server/v3/pkg/notification"
"github.com/openimsdk/open-im-server/v3/pkg/notification/common_user"
"github.com/openimsdk/open-im-server/v3/pkg/rpcclient"
"github.com/openimsdk/open-im-server/v3/pkg/rpcclient/notification"
"github.com/openimsdk/protocol/constant"
pbgroup "github.com/openimsdk/protocol/group"
"github.com/openimsdk/protocol/msg"
@@ -42,6 +38,7 @@ import (
"github.com/openimsdk/tools/utils/datautil"
"github.com/openimsdk/tools/utils/stringutil"
"go.mongodb.org/mongo-driver/mongo"
"time"
)
// GroupApplicationReceiver
@@ -50,38 +47,36 @@ const (
adminReceiver
)
func NewNotificationSender(db controller.GroupDatabase, config *Config, userClient *rpcli.UserClient, msgClient *rpcli.MsgClient, conversationClient *rpcli.ConversationClient) *NotificationSender {
return &NotificationSender{
NotificationSender: rpcclient.NewNotificationSender(&config.NotificationConfig,
rpcclient.WithRpcClient(func(ctx context.Context, req *msg.SendMsgReq) (*msg.SendMsgResp, error) {
return msgClient.SendMsg(ctx, req)
}),
rpcclient.WithUserRpcClient(userClient.GetUserInfo),
),
getUsersInfo: func(ctx context.Context, userIDs []string) ([]common_user.CommonUser, error) {
users, err := userClient.GetUsersInfo(ctx, userIDs)
if err != nil {
return nil, err
}
return datautil.Slice(users, func(e *sdkws.UserInfo) common_user.CommonUser { return e }), nil
},
func NewGroupNotificationSender(
db controller.GroupDatabase,
msgRpcClient *rpcclient.MessageRpcClient,
userRpcClient *rpcclient.UserRpcClient,
conversationRpcClient *rpcclient.ConversationRpcClient,
config *Config,
fn func(ctx context.Context, userIDs []string) ([]notification.CommonUser, error),
) *GroupNotificationSender {
return &GroupNotificationSender{
NotificationSender: rpcclient.NewNotificationSender(&config.NotificationConfig, rpcclient.WithRpcClient(msgRpcClient), rpcclient.WithUserRpcClient(userRpcClient)),
getUsersInfo: fn,
db: db,
config: config,
msgClient: msgClient,
conversationClient: conversationClient,
conversationRpcClient: conversationRpcClient,
msgRpcClient: msgRpcClient,
}
}
type NotificationSender struct {
type GroupNotificationSender struct {
*rpcclient.NotificationSender
getUsersInfo func(ctx context.Context, userIDs []string) ([]common_user.CommonUser, error)
db controller.GroupDatabase
config *Config
msgClient *rpcli.MsgClient
conversationClient *rpcli.ConversationClient
getUsersInfo func(ctx context.Context, userIDs []string) ([]notification.CommonUser, error)
db controller.GroupDatabase
config *Config
conversationRpcClient *rpcclient.ConversationRpcClient
msgRpcClient *rpcclient.MessageRpcClient
}
func (g *NotificationSender) PopulateGroupMember(ctx context.Context, members ...*model.GroupMember) error {
func (g *GroupNotificationSender) PopulateGroupMember(ctx context.Context, members ...*model.GroupMember) error {
if len(members) == 0 {
return nil
}
@@ -96,7 +91,7 @@ func (g *NotificationSender) PopulateGroupMember(ctx context.Context, members ..
if err != nil {
return err
}
userMap := make(map[string]common_user.CommonUser)
userMap := make(map[string]notification.CommonUser)
for i, user := range users {
userMap[user.GetUserID()] = users[i]
}
@@ -116,7 +111,7 @@ func (g *NotificationSender) PopulateGroupMember(ctx context.Context, members ..
return nil
}
func (g *NotificationSender) getUser(ctx context.Context, userID string) (*sdkws.PublicUserInfo, error) {
func (g *GroupNotificationSender) getUser(ctx context.Context, userID string) (*sdkws.PublicUserInfo, error) {
users, err := g.getUsersInfo(ctx, []string{userID})
if err != nil {
return nil, err
@@ -132,7 +127,7 @@ func (g *NotificationSender) getUser(ctx context.Context, userID string) (*sdkws
}, nil
}
func (g *NotificationSender) getGroupInfo(ctx context.Context, groupID string) (*sdkws.GroupInfo, error) {
func (g *GroupNotificationSender) getGroupInfo(ctx context.Context, groupID string) (*sdkws.GroupInfo, error) {
gm, err := g.db.TakeGroup(ctx, groupID)
if err != nil {
return nil, err
@@ -153,7 +148,7 @@ func (g *NotificationSender) getGroupInfo(ctx context.Context, groupID string) (
return convert.Db2PbGroupInfo(gm, ownerUserID, num), nil
}
func (g *NotificationSender) getGroupMembers(ctx context.Context, groupID string, userIDs []string) ([]*sdkws.GroupMemberFullInfo, error) {
func (g *GroupNotificationSender) getGroupMembers(ctx context.Context, groupID string, userIDs []string) ([]*sdkws.GroupMemberFullInfo, error) {
members, err := g.db.FindGroupMembers(ctx, groupID, userIDs)
if err != nil {
return nil, err
@@ -169,7 +164,7 @@ func (g *NotificationSender) getGroupMembers(ctx context.Context, groupID string
return res, nil
}
func (g *NotificationSender) getGroupMemberMap(ctx context.Context, groupID string, userIDs []string) (map[string]*sdkws.GroupMemberFullInfo, error) {
func (g *GroupNotificationSender) getGroupMemberMap(ctx context.Context, groupID string, userIDs []string) (map[string]*sdkws.GroupMemberFullInfo, error) {
members, err := g.getGroupMembers(ctx, groupID, userIDs)
if err != nil {
return nil, err
@@ -181,7 +176,7 @@ func (g *NotificationSender) getGroupMemberMap(ctx context.Context, groupID stri
return m, nil
}
func (g *NotificationSender) getGroupMember(ctx context.Context, groupID string, userID string) (*sdkws.GroupMemberFullInfo, error) {
func (g *GroupNotificationSender) getGroupMember(ctx context.Context, groupID string, userID string) (*sdkws.GroupMemberFullInfo, error) {
members, err := g.getGroupMembers(ctx, groupID, []string{userID})
if err != nil {
return nil, err
@@ -192,7 +187,7 @@ func (g *NotificationSender) getGroupMember(ctx context.Context, groupID string,
return members[0], nil
}
func (g *NotificationSender) getGroupOwnerAndAdminUserID(ctx context.Context, groupID string) ([]string, error) {
func (g *GroupNotificationSender) getGroupOwnerAndAdminUserID(ctx context.Context, groupID string) ([]string, error) {
members, err := g.db.FindGroupMemberRoleLevels(ctx, groupID, []int32{constant.GroupOwner, constant.GroupAdmin})
if err != nil {
return nil, err
@@ -204,7 +199,7 @@ func (g *NotificationSender) getGroupOwnerAndAdminUserID(ctx context.Context, gr
return datautil.Slice(members, fn), nil
}
func (g *NotificationSender) groupMemberDB2PB(member *model.GroupMember, appMangerLevel int32) *sdkws.GroupMemberFullInfo {
func (g *GroupNotificationSender) groupMemberDB2PB(member *model.GroupMember, appMangerLevel int32) *sdkws.GroupMemberFullInfo {
return &sdkws.GroupMemberFullInfo{
GroupID: member.GroupID,
UserID: member.UserID,
@@ -221,7 +216,7 @@ func (g *NotificationSender) groupMemberDB2PB(member *model.GroupMember, appMang
}
}
/* func (g *NotificationSender) getUsersInfoMap(ctx context.Context, userIDs []string) (map[string]*sdkws.UserInfo, error) {
/* func (g *GroupNotificationSender) getUsersInfoMap(ctx context.Context, userIDs []string) (map[string]*sdkws.UserInfo, error) {
users, err := g.getUsersInfo(ctx, userIDs)
if err != nil {
return nil, err
@@ -233,11 +228,11 @@ func (g *NotificationSender) groupMemberDB2PB(member *model.GroupMember, appMang
return result, nil
} */
func (g *NotificationSender) fillOpUser(ctx context.Context, opUser **sdkws.GroupMemberFullInfo, groupID string) (err error) {
func (g *GroupNotificationSender) fillOpUser(ctx context.Context, opUser **sdkws.GroupMemberFullInfo, groupID string) (err error) {
return g.fillOpUserByUserID(ctx, mcontext.GetOpUserID(ctx), opUser, groupID)
}
func (g *NotificationSender) fillOpUserByUserID(ctx context.Context, userID string, opUser **sdkws.GroupMemberFullInfo, groupID string) error {
func (g *GroupNotificationSender) fillOpUserByUserID(ctx context.Context, userID string, opUser **sdkws.GroupMemberFullInfo, groupID string) error {
if opUser == nil {
return errs.ErrInternalServer.WrapMsg("**sdkws.GroupMemberFullInfo is nil")
}
@@ -281,7 +276,7 @@ func (g *NotificationSender) fillOpUserByUserID(ctx context.Context, userID stri
return nil
}
func (g *NotificationSender) setVersion(ctx context.Context, version *uint64, versionID *string, collName string, id string) {
func (g *GroupNotificationSender) setVersion(ctx context.Context, version *uint64, versionID *string, collName string, id string) {
versions := versionctx.GetVersionLog(ctx).Get()
for _, coll := range versions {
if coll.Name == collName && coll.Doc.DID == id {
@@ -292,7 +287,7 @@ func (g *NotificationSender) setVersion(ctx context.Context, version *uint64, ve
}
}
func (g *NotificationSender) setSortVersion(ctx context.Context, version *uint64, versionID *string, collName string, id string, sortVersion *uint64) {
func (g *GroupNotificationSender) setSortVersion(ctx context.Context, version *uint64, versionID *string, collName string, id string, sortVersion *uint64) {
versions := versionctx.GetVersionLog(ctx).Get()
for _, coll := range versions {
if coll.Name == collName && coll.Doc.DID == id {
@@ -307,7 +302,7 @@ func (g *NotificationSender) setSortVersion(ctx context.Context, version *uint64
}
}
func (g *NotificationSender) GroupCreatedNotification(ctx context.Context, tips *sdkws.GroupCreatedTips) {
func (g *GroupNotificationSender) GroupCreatedNotification(ctx context.Context, tips *sdkws.GroupCreatedTips) {
var err error
defer func() {
if err != nil {
@@ -321,7 +316,7 @@ func (g *NotificationSender) GroupCreatedNotification(ctx context.Context, tips
g.Notification(ctx, mcontext.GetOpUserID(ctx), tips.Group.GroupID, constant.GroupCreatedNotification, tips)
}
func (g *NotificationSender) GroupInfoSetNotification(ctx context.Context, tips *sdkws.GroupInfoSetTips) {
func (g *GroupNotificationSender) GroupInfoSetNotification(ctx context.Context, tips *sdkws.GroupInfoSetTips) {
var err error
defer func() {
if err != nil {
@@ -335,7 +330,7 @@ func (g *NotificationSender) GroupInfoSetNotification(ctx context.Context, tips
g.Notification(ctx, mcontext.GetOpUserID(ctx), tips.Group.GroupID, constant.GroupInfoSetNotification, tips, rpcclient.WithRpcGetUserName())
}
func (g *NotificationSender) GroupInfoSetNameNotification(ctx context.Context, tips *sdkws.GroupInfoSetNameTips) {
func (g *GroupNotificationSender) GroupInfoSetNameNotification(ctx context.Context, tips *sdkws.GroupInfoSetNameTips) {
var err error
defer func() {
if err != nil {
@@ -349,7 +344,7 @@ func (g *NotificationSender) GroupInfoSetNameNotification(ctx context.Context, t
g.Notification(ctx, mcontext.GetOpUserID(ctx), tips.Group.GroupID, constant.GroupInfoSetNameNotification, tips)
}
func (g *NotificationSender) GroupInfoSetAnnouncementNotification(ctx context.Context, tips *sdkws.GroupInfoSetAnnouncementTips) {
func (g *GroupNotificationSender) GroupInfoSetAnnouncementNotification(ctx context.Context, tips *sdkws.GroupInfoSetAnnouncementTips) {
var err error
defer func() {
if err != nil {
@@ -363,7 +358,7 @@ func (g *NotificationSender) GroupInfoSetAnnouncementNotification(ctx context.Co
g.Notification(ctx, mcontext.GetOpUserID(ctx), tips.Group.GroupID, constant.GroupInfoSetAnnouncementNotification, tips, rpcclient.WithRpcGetUserName())
}
func (g *NotificationSender) JoinGroupApplicationNotification(ctx context.Context, req *pbgroup.JoinGroupReq) {
func (g *GroupNotificationSender) JoinGroupApplicationNotification(ctx context.Context, req *pbgroup.JoinGroupReq) {
var err error
defer func() {
if err != nil {
@@ -391,7 +386,7 @@ func (g *NotificationSender) JoinGroupApplicationNotification(ctx context.Contex
}
}
func (g *NotificationSender) MemberQuitNotification(ctx context.Context, member *sdkws.GroupMemberFullInfo) {
func (g *GroupNotificationSender) MemberQuitNotification(ctx context.Context, member *sdkws.GroupMemberFullInfo) {
var err error
defer func() {
if err != nil {
@@ -408,7 +403,7 @@ func (g *NotificationSender) MemberQuitNotification(ctx context.Context, member
g.Notification(ctx, mcontext.GetOpUserID(ctx), member.GroupID, constant.MemberQuitNotification, tips)
}
func (g *NotificationSender) GroupApplicationAcceptedNotification(ctx context.Context, req *pbgroup.GroupApplicationResponseReq) {
func (g *GroupNotificationSender) GroupApplicationAcceptedNotification(ctx context.Context, req *pbgroup.GroupApplicationResponseReq) {
var err error
defer func() {
if err != nil {
@@ -441,7 +436,7 @@ func (g *NotificationSender) GroupApplicationAcceptedNotification(ctx context.Co
}
}
func (g *NotificationSender) GroupApplicationRejectedNotification(ctx context.Context, req *pbgroup.GroupApplicationResponseReq) {
func (g *GroupNotificationSender) GroupApplicationRejectedNotification(ctx context.Context, req *pbgroup.GroupApplicationResponseReq) {
var err error
defer func() {
if err != nil {
@@ -474,7 +469,7 @@ func (g *NotificationSender) GroupApplicationRejectedNotification(ctx context.Co
}
}
func (g *NotificationSender) GroupOwnerTransferredNotification(ctx context.Context, req *pbgroup.TransferGroupOwnerReq) {
func (g *GroupNotificationSender) GroupOwnerTransferredNotification(ctx context.Context, req *pbgroup.TransferGroupOwnerReq) {
var err error
defer func() {
if err != nil {
@@ -505,7 +500,7 @@ func (g *NotificationSender) GroupOwnerTransferredNotification(ctx context.Conte
g.Notification(ctx, mcontext.GetOpUserID(ctx), group.GroupID, constant.GroupOwnerTransferredNotification, tips)
}
func (g *NotificationSender) MemberKickedNotification(ctx context.Context, tips *sdkws.MemberKickedTips) {
func (g *GroupNotificationSender) MemberKickedNotification(ctx context.Context, tips *sdkws.MemberKickedTips) {
var err error
defer func() {
if err != nil {
@@ -519,7 +514,7 @@ func (g *NotificationSender) MemberKickedNotification(ctx context.Context, tips
g.Notification(ctx, mcontext.GetOpUserID(ctx), tips.Group.GroupID, constant.MemberKickedNotification, tips)
}
func (g *NotificationSender) GroupApplicationAgreeMemberEnterNotification(ctx context.Context, groupID string, invitedOpUserID string, entrantUserID ...string) error {
func (g *GroupNotificationSender) GroupApplicationAgreeMemberEnterNotification(ctx context.Context, groupID string, invitedOpUserID string, entrantUserID ...string) error {
var err error
defer func() {
if err != nil {
@@ -529,15 +524,20 @@ func (g *NotificationSender) GroupApplicationAgreeMemberEnterNotification(ctx co
if !g.config.RpcConfig.EnableHistoryForNewMembers {
conversationID := msgprocessor.GetConversationIDBySessionType(constant.ReadGroupChatType, groupID)
maxSeq, err := g.msgClient.GetConversationMaxSeq(ctx, conversationID)
maxSeq, err := g.msgRpcClient.GetConversationMaxSeq(ctx, conversationID)
if err != nil {
return err
}
if err := g.msgClient.SetUserConversationsMinSeq(ctx, conversationID, entrantUserID, maxSeq+1); err != nil {
if _, err = g.msgRpcClient.SetUserConversationsMinSeq(ctx, &msg.SetUserConversationsMinSeqReq{
UserIDs: entrantUserID,
ConversationID: conversationID,
Seq: maxSeq,
}); err != nil {
return err
}
}
if err := g.conversationClient.CreateGroupChatConversations(ctx, groupID, entrantUserID); err != nil {
if err := g.conversationRpcClient.GroupChatFirstCreateConversation(ctx, groupID, entrantUserID); err != nil {
return err
}
@@ -573,7 +573,7 @@ func (g *NotificationSender) GroupApplicationAgreeMemberEnterNotification(ctx co
return nil
}
func (g *NotificationSender) MemberEnterNotification(ctx context.Context, groupID string, entrantUserID string) error {
func (g *GroupNotificationSender) MemberEnterNotification(ctx context.Context, groupID string, entrantUserID string) error {
var err error
defer func() {
if err != nil {
@@ -583,17 +583,23 @@ func (g *NotificationSender) MemberEnterNotification(ctx context.Context, groupI
if !g.config.RpcConfig.EnableHistoryForNewMembers {
conversationID := msgprocessor.GetConversationIDBySessionType(constant.ReadGroupChatType, groupID)
maxSeq, err := g.msgClient.GetConversationMaxSeq(ctx, conversationID)
maxSeq, err := g.msgRpcClient.GetConversationMaxSeq(ctx, conversationID)
if err != nil {
return err
}
if err := g.msgClient.SetUserConversationsMinSeq(ctx, conversationID, []string{entrantUserID}, maxSeq+1); err != nil {
if _, err = g.msgRpcClient.SetUserConversationsMinSeq(ctx, &msg.SetUserConversationsMinSeqReq{
UserIDs: []string{entrantUserID},
ConversationID: conversationID,
Seq: maxSeq,
}); err != nil {
return err
}
}
if err := g.conversationClient.CreateGroupChatConversations(ctx, groupID, []string{entrantUserID}); err != nil {
if err := g.conversationRpcClient.GroupChatFirstCreateConversation(ctx, groupID, []string{entrantUserID}); err != nil {
return err
}
var group *sdkws.GroupInfo
group, err = g.getGroupInfo(ctx, groupID)
if err != nil {
@@ -614,7 +620,7 @@ func (g *NotificationSender) MemberEnterNotification(ctx context.Context, groupI
return nil
}
func (g *NotificationSender) GroupDismissedNotification(ctx context.Context, tips *sdkws.GroupDismissedTips) {
func (g *GroupNotificationSender) GroupDismissedNotification(ctx context.Context, tips *sdkws.GroupDismissedTips) {
var err error
defer func() {
if err != nil {
@@ -627,7 +633,7 @@ func (g *NotificationSender) GroupDismissedNotification(ctx context.Context, tip
g.Notification(ctx, mcontext.GetOpUserID(ctx), tips.Group.GroupID, constant.GroupDismissedNotification, tips)
}
func (g *NotificationSender) GroupMemberMutedNotification(ctx context.Context, groupID, groupMemberUserID string, mutedSeconds uint32) {
func (g *GroupNotificationSender) GroupMemberMutedNotification(ctx context.Context, groupID, groupMemberUserID string, mutedSeconds uint32) {
var err error
defer func() {
if err != nil {
@@ -655,7 +661,7 @@ func (g *NotificationSender) GroupMemberMutedNotification(ctx context.Context, g
g.Notification(ctx, mcontext.GetOpUserID(ctx), group.GroupID, constant.GroupMemberMutedNotification, tips)
}
func (g *NotificationSender) GroupMemberCancelMutedNotification(ctx context.Context, groupID, groupMemberUserID string) {
func (g *GroupNotificationSender) GroupMemberCancelMutedNotification(ctx context.Context, groupID, groupMemberUserID string) {
var err error
defer func() {
if err != nil {
@@ -680,7 +686,7 @@ func (g *NotificationSender) GroupMemberCancelMutedNotification(ctx context.Cont
g.Notification(ctx, mcontext.GetOpUserID(ctx), group.GroupID, constant.GroupMemberCancelMutedNotification, tips)
}
func (g *NotificationSender) GroupMutedNotification(ctx context.Context, groupID string) {
func (g *GroupNotificationSender) GroupMutedNotification(ctx context.Context, groupID string) {
var err error
defer func() {
if err != nil {
@@ -708,7 +714,7 @@ func (g *NotificationSender) GroupMutedNotification(ctx context.Context, groupID
g.Notification(ctx, mcontext.GetOpUserID(ctx), group.GroupID, constant.GroupMutedNotification, tips)
}
func (g *NotificationSender) GroupCancelMutedNotification(ctx context.Context, groupID string) {
func (g *GroupNotificationSender) GroupCancelMutedNotification(ctx context.Context, groupID string) {
var err error
defer func() {
if err != nil {
@@ -736,7 +742,7 @@ func (g *NotificationSender) GroupCancelMutedNotification(ctx context.Context, g
g.Notification(ctx, mcontext.GetOpUserID(ctx), group.GroupID, constant.GroupCancelMutedNotification, tips)
}
func (g *NotificationSender) GroupMemberInfoSetNotification(ctx context.Context, groupID, groupMemberUserID string) {
func (g *GroupNotificationSender) GroupMemberInfoSetNotification(ctx context.Context, groupID, groupMemberUserID string) {
var err error
defer func() {
if err != nil {
@@ -761,7 +767,7 @@ func (g *NotificationSender) GroupMemberInfoSetNotification(ctx context.Context,
g.Notification(ctx, mcontext.GetOpUserID(ctx), group.GroupID, constant.GroupMemberInfoSetNotification, tips)
}
func (g *NotificationSender) GroupMemberSetToAdminNotification(ctx context.Context, groupID, groupMemberUserID string) {
func (g *GroupNotificationSender) GroupMemberSetToAdminNotification(ctx context.Context, groupID, groupMemberUserID string) {
var err error
defer func() {
if err != nil {
@@ -785,7 +791,7 @@ func (g *NotificationSender) GroupMemberSetToAdminNotification(ctx context.Conte
g.Notification(ctx, mcontext.GetOpUserID(ctx), group.GroupID, constant.GroupMemberSetToAdminNotification, tips)
}
func (g *NotificationSender) GroupMemberSetToOrdinaryUserNotification(ctx context.Context, groupID, groupMemberUserID string) {
func (g *GroupNotificationSender) GroupMemberSetToOrdinaryUserNotification(ctx context.Context, groupID, groupMemberUserID string) {
var err error
defer func() {
if err != nil {
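The two SetUserConversationsMinSeq hunks above differ by one: one side advances new members' min seq to maxSeq+1, the other passes maxSeq. Assuming min seq is the first readable seq, hiding all existing history requires maxSeq+1; a tiny sketch of that intent:

package main

import "fmt"

// Assuming minSeq is the first seq a member may read: to hide all existing
// history (seqs 1..maxSeq), the member's min seq must be maxSeq+1, so the
// first visible message is the next one written after joining.
func firstVisibleSeq(conversationMaxSeq int64) int64 {
	return conversationMaxSeq + 1
}

func main() {
	fmt.Println(firstVisibleSeq(100)) // 101: messages 1..100 stay hidden
}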
+1 -1
View File
@@ -137,7 +137,7 @@ func (m *msgServer) MarkConversationAsRead(ctx context.Context, req *msg.MarkCon
return nil, err
}
hasReadSeq, err := m.MsgDatabase.GetHasReadSeq(ctx, req.UserID, req.ConversationID)
if err != nil && !errors.Is(err, redis.Nil) {
if err != nil && errors.Is(err, redis.Nil) {
return nil, err
}
var seqs []int64
+105 -40
View File
@@ -2,59 +2,124 @@ package msg
import (
"context"
"time"
"github.com/openimsdk/open-im-server/v3/pkg/authverify"
"github.com/openimsdk/open-im-server/v3/pkg/common/convert"
pbconversation "github.com/openimsdk/protocol/conversation"
"github.com/openimsdk/protocol/msg"
"github.com/openimsdk/protocol/wrapperspb"
"github.com/openimsdk/tools/errs"
"github.com/openimsdk/tools/log"
"strings"
"github.com/openimsdk/tools/mcontext"
"github.com/openimsdk/tools/utils/idutil"
"github.com/openimsdk/tools/utils/stringutil"
"golang.org/x/sync/errgroup"
)
// DestructMsgs performs a hard delete in the database.
func (m *msgServer) DestructMsgs(ctx context.Context, req *msg.DestructMsgsReq) (*msg.DestructMsgsResp, error) {
// ClearMsg performs a hard delete in the database.
func (m *msgServer) ClearMsg(ctx context.Context, req *msg.ClearMsgReq) (_ *msg.ClearMsgResp, err error) {
if err := authverify.CheckAdmin(ctx, m.config.Share.IMAdminUserID); err != nil {
return nil, err
}
- docs, err := m.MsgDatabase.GetRandBeforeMsg(ctx, req.Timestamp, int(req.Limit))
+ if req.Timestamp > time.Now().UnixMilli() {
+ return nil, errs.ErrArgs.WrapMsg("request millisecond timestamp error")
+ }
+ var (
+ docNum int
+ msgNum int
+ start = time.Now()
+ )
+ clearMsg := func(ctx context.Context) (bool, error) {
+ docIDs, err := m.MsgDatabase.GetDocIDs(ctx)
+ if err != nil {
+ return false, err
+ }
+ msgs, err := m.MsgDatabase.GetBeforeMsg(ctx, req.Timestamp, docIDs, 5000)
+ if err != nil {
+ return false, err
+ }
+ if len(msgs) == 0 {
+ return false, nil
+ }
+ for _, msg := range msgs {
+ index, err := m.MsgDatabase.DeleteDocMsgBefore(ctx, req.Timestamp, msg)
+ if err != nil {
+ return false, err
+ }
+ if len(index) == 0 {
+ return false, errs.ErrInternalServer.WrapMsg("delete doc msg failed")
+ }
+ docNum++
+ msgNum += len(index)
+ }
+ return true, nil
+ }
+ _, err = clearMsg(ctx)
+ if err != nil {
+ log.ZError(ctx, "clear msg failed", err, "docNum", docNum, "msgNum", msgNum, "cost", time.Since(start))
+ return nil, err
+ }
- for i, doc := range docs {
- if err := m.MsgDatabase.DeleteDoc(ctx, doc.DocID); err != nil {
- return nil, err
- }
- log.ZDebug(ctx, "DestructMsgs delete doc", "index", i, "docID", doc.DocID)
- index := strings.LastIndex(doc.DocID, ":")
- if index < 0 {
- continue
- }
- var minSeq int64
- for _, model := range doc.Msg {
- if model.Msg == nil {
- continue
- }
- if model.Msg.Seq > minSeq {
- minSeq = model.Msg.Seq
- }
- }
- if minSeq <= 0 {
- continue
- }
- conversationID := doc.DocID[:index]
- if conversationID == "" {
- continue
- }
- minSeq++
- if err := m.MsgDatabase.SetMinSeq(ctx, conversationID, minSeq); err != nil {
- return nil, err
- }
- log.ZDebug(ctx, "DestructMsgs delete doc set min seq", "index", i, "docID", doc.DocID, "conversationID", conversationID, "setMinSeq", minSeq)
- }
- return &msg.DestructMsgsResp{Count: int32(len(docs))}, nil
+ log.ZDebug(ctx, "clearing message", "docNum", docNum, "msgNum", msgNum, "cost", time.Since(start))
+ return &msg.ClearMsgResp{}, nil
}
- func (m *msgServer) GetLastMessageSeqByTime(ctx context.Context, req *msg.GetLastMessageSeqByTimeReq) (*msg.GetLastMessageSeqByTimeResp, error) {
- seq, err := m.MsgDatabase.GetLastMessageSeqByTime(ctx, req.ConversationID, req.Time)
- if err != nil {
+ // soft delete for self
+ func (m *msgServer) DestructMsgs(ctx context.Context, req *msg.DestructMsgsReq) (_ *msg.DestructMsgsResp, err error) {
+ temp := convert.ConversationsPb2DB(req.Conversations)
+ batchNum := 100
+ errg, _ := errgroup.WithContext(ctx)
+ errg.SetLimit(100)
+ for i := 0; i < len(temp); i += batchNum {
+ batch := temp[i:min(i+batchNum, len(temp))]
+ errg.Go(func() error {
+ for _, conversation := range batch {
+ handleCtx := mcontext.NewCtx(stringutil.GetSelfFuncName() + "-" + idutil.OperationIDGenerator() + "-" + conversation.ConversationID + "-" + conversation.OwnerUserID)
+ log.ZDebug(handleCtx, "User MsgsDestruct",
+ "conversationID", conversation.ConversationID,
+ "ownerUserID", conversation.OwnerUserID,
+ "msgDestructTime", conversation.MsgDestructTime,
+ "lastMsgDestructTime", conversation.LatestMsgDestructTime)
+ seqs, err := m.MsgDatabase.UserMsgsDestruct(handleCtx, conversation.OwnerUserID, conversation.ConversationID, conversation.MsgDestructTime, conversation.LatestMsgDestructTime)
+ if err != nil {
+ log.ZError(handleCtx, "user msg destruct failed", err, "conversationID", conversation.ConversationID, "ownerUserID", conversation.OwnerUserID)
+ continue
+ }
+ if len(seqs) > 0 {
+ if err := m.Conversation.UpdateConversation(handleCtx,
+ &pbconversation.UpdateConversationReq{
+ UserIDs: []string{conversation.OwnerUserID},
+ ConversationID: conversation.ConversationID,
+ LatestMsgDestructTime: wrapperspb.Int64(time.Now().UnixMilli())}); err != nil {
+ log.ZError(handleCtx, "updateUsersConversationField failed", err, "conversationID", conversation.ConversationID, "ownerUserID", conversation.OwnerUserID)
+ continue
+ }
+ // Notify the SDK client that the user's seq has been updated, if needed:
+ // m.msgNotificationSender.UserDeleteMsgsNotification(handleCtx, conversation.OwnerUserID, conversation.ConversationID, seqs)
+ }
+ }
+ return nil
+ })
+ }
+ if err := errg.Wait(); err != nil {
+ return nil, err
+ }
- return &msg.GetLastMessageSeqByTimeResp{Seq: seq}, nil
+ return nil, nil
}
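The new soft-delete DestructMsgs above fans conversations out in fixed-size batches under an errgroup with a concurrency cap. A self-contained sketch of that batching shape (batch size and worker limit are illustrative):

package main

import (
	"fmt"

	"golang.org/x/sync/errgroup"
)

func main() {
	items := make([]int, 1050)
	const batchNum = 100

	var g errgroup.Group
	g.SetLimit(10) // cap concurrent workers, as SetLimit(100) does above

	for i := 0; i < len(items); i += batchNum {
		batch := items[i:min(i+batchNum, len(items))] // one slice per goroutine
		g.Go(func() error {
			// Each goroutine owns one batch; per-item failures are logged
			// and skipped in the real code rather than aborting the job.
			fmt.Println("processing batch of", len(batch))
			return nil
		})
	}
	if err := g.Wait(); err != nil {
		panic(err)
	}
}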
+15 -14
@@ -74,13 +74,19 @@ func (m *msgServer) DeleteMsgs(ctx context.Context, req *msg.DeleteMsgsReq) (*ms
if err := m.MsgDatabase.DeleteMsgsPhysicalBySeqs(ctx, req.ConversationID, req.Seqs); err != nil {
return nil, err
}
- conv, err := m.conversationClient.GetConversationsByConversationID(ctx, req.ConversationID)
+ conversations, err := m.Conversation.GetConversationsByConversationID(ctx, []string{req.ConversationID})
if err != nil {
return nil, err
}
tips := &sdkws.DeleteMsgsTips{UserID: req.UserID, ConversationID: req.ConversationID, Seqs: req.Seqs}
- m.notificationSender.NotificationWithSessionType(ctx, req.UserID, m.conversationAndGetRecvID(conv, req.UserID),
- constant.DeleteMsgsNotification, conv.ConversationType, tips)
+ m.notificationSender.NotificationWithSessionType(
+ ctx,
+ req.UserID,
+ m.conversationAndGetRecvID(conversations[0], req.UserID),
+ constant.DeleteMsgsNotification,
+ conversations[0].ConversationType,
+ tips,
+ )
} else {
if err := m.MsgDatabase.DeleteUserMsgsBySeqs(ctx, req.UserID, req.ConversationID, req.Seqs); err != nil {
return nil, err
@@ -106,14 +112,16 @@ func (m *msgServer) DeleteMsgPhysical(ctx context.Context, req *msg.DeleteMsgPhy
return nil, err
}
remainTime := timeutil.GetCurrentTimestampBySecond() - req.Timestamp
- if _, err := m.DestructMsgs(ctx, &msg.DestructMsgsReq{Timestamp: remainTime, Limit: 9999}); err != nil {
- return nil, err
+ for _, conversationID := range req.ConversationIDs {
+ if err := m.MsgDatabase.DeleteConversationMsgsAndSetMinSeq(ctx, conversationID, remainTime); err != nil {
+ log.ZWarn(ctx, "DeleteConversationMsgsAndSetMinSeq error", err, "conversationID", conversationID, "err", err)
+ }
}
return &msg.DeleteMsgPhysicalResp{}, nil
}
func (m *msgServer) clearConversation(ctx context.Context, conversationIDs []string, userID string, deleteSyncOpt *msg.DeleteSyncOpt) error {
- conversations, err := m.conversationClient.GetConversationsByConversationIDs(ctx, conversationIDs)
+ conversations, err := m.Conversation.GetConversationsByConversationID(ctx, conversationIDs)
if err != nil {
return err
}
@@ -130,16 +138,9 @@ func (m *msgServer) clearConversation(ctx context.Context, conversationIDs []str
}
isSyncSelf, isSyncOther := m.validateDeleteSyncOpt(deleteSyncOpt)
if !isSyncOther {
- setSeqs := m.getMinSeqs(maxSeqs)
- if err := m.MsgDatabase.SetUserConversationsMinSeqs(ctx, userID, setSeqs); err != nil {
+ if err := m.MsgDatabase.SetUserConversationsMinSeqs(ctx, userID, m.getMinSeqs(maxSeqs)); err != nil {
return err
}
- ownerUserIDs := []string{userID}
- for conversationID, seq := range setSeqs {
- if err := m.conversationClient.SetConversationMinSeq(ctx, conversationID, ownerUserIDs, seq); err != nil {
- return err
- }
- }
// notification 2 self
if isSyncSelf {
tips := &sdkws.ClearConversationTips{UserID: userID, ConversationIDs: existConversationIDs}
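Clearing a conversation for one user is implemented above by raising that user's minimum sequence past the current maximum rather than deleting rows. A hedged sketch of the idea behind the repo's getMinSeqs helper (its real implementation is not shown here, and these types are simplified):

// To hide everything up to and including maxSeq, the per-user floor
// becomes maxSeq+1; reads below the floor are filtered out.
func getMinSeqs(maxSeqs map[string]int64) map[string]int64 {
	minSeqs := make(map[string]int64, len(maxSeqs))
	for conversationID, maxSeq := range maxSeqs {
		minSeqs[conversationID] = maxSeq + 1
	}
	return minSeqs
}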
+5 -1
@@ -17,7 +17,7 @@ package msg
import (
"context"
"github.com/openimsdk/open-im-server/v3/pkg/notification"
"github.com/openimsdk/open-im-server/v3/pkg/rpcclient"
"github.com/openimsdk/protocol/constant"
"github.com/openimsdk/protocol/sdkws"
)
@@ -48,3 +48,7 @@ func (m *MsgNotificationSender) MarkAsReadNotification(ctx context.Context, conv
}
m.NotificationWithSessionType(ctx, sendID, recvID, constant.HasReadReceipt, sessionType, tips)
}
func (m *MsgNotificationSender) StreamMsgNotification(ctx context.Context, sendID string, recvID string, sessionType int32, tips *sdkws.StreamMsgTips) {
m.NotificationWithSessionType(ctx, sendID, recvID, constant.StreamMsgNotification, sessionType, tips)
}
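StreamMsgNotification above is a thin wrapper: it packages incremental stream output as a tips payload and reuses the generic session-typed notification path. A hypothetical call site, with illustrative IDs (the tips field names are taken from the stream code later in this compare):

tips := &sdkws.StreamMsgTips{
	ConversationID: "si_u1_u2", // illustrative conversation ID
	ClientMsgID:    "cmsg-123",
	StartIndex:     4,
	Packets:        []string{"delta text"},
	End:            false, // more packets will follow
}
sender.StreamMsgNotification(ctx, "u1", "u2", constant.SingleChatType, tips)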
+5 -9
@@ -17,9 +17,8 @@ package msg
import (
"context"
"encoding/json"
"time"
"github.com/openimsdk/open-im-server/v3/pkg/common/storage/model"
"time"
"github.com/openimsdk/open-im-server/v3/pkg/authverify"
"github.com/openimsdk/open-im-server/v3/pkg/common/servererrs"
@@ -64,8 +63,7 @@ func (m *msgServer) RevokeMsg(ctx context.Context, req *msg.RevokeMsgReq) (*msg.
log.ZDebug(ctx, "GetMsgBySeqs", "conversationID", req.ConversationID, "seq", req.Seq, "msg", string(data))
var role int32
if !authverify.IsAppManagerUid(ctx, m.config.Share.IMAdminUserID) {
- sessionType := msgs[0].SessionType
- switch sessionType {
+ switch msgs[0].SessionType {
case constant.SingleChatType:
if err := authverify.CheckAccessV3(ctx, msgs[0].SendID, m.config.Share.IMAdminUserID); err != nil {
return nil, err
@@ -80,10 +78,8 @@ func (m *msgServer) RevokeMsg(ctx context.Context, req *msg.RevokeMsgReq) (*msg.
switch members[req.UserID].RoleLevel {
case constant.GroupOwner:
case constant.GroupAdmin:
- if sendMember, ok := members[msgs[0].SendID]; ok {
- if sendMember.RoleLevel != constant.GroupOrdinaryUsers {
- return nil, errs.ErrNoPermission.WrapMsg("no permission")
- }
+ if members[msgs[0].SendID].RoleLevel != constant.GroupOrdinaryUsers {
+ return nil, errs.ErrNoPermission.WrapMsg("no permission")
}
default:
return nil, errs.ErrNoPermission.WrapMsg("no permission")
@@ -93,7 +89,7 @@ func (m *msgServer) RevokeMsg(ctx context.Context, req *msg.RevokeMsgReq) (*msg.
role = member.RoleLevel
}
default:
return nil, errs.ErrInternalServer.WrapMsg("msg sessionType not supported", "sessionType", sessionType)
return nil, errs.ErrInternalServer.WrapMsg("msg sessionType not supported")
}
}
now := time.Now().UnixMilli()
+22 -16
@@ -16,6 +16,7 @@ package msg
import (
"context"
"github.com/openimsdk/tools/mw"
"github.com/openimsdk/open-im-server/v3/pkg/common/prommetrics"
"github.com/openimsdk/open-im-server/v3/pkg/msgprocessor"
@@ -34,6 +35,11 @@ import (
func (m *msgServer) SendMsg(ctx context.Context, req *pbmsg.SendMsgReq) (*pbmsg.SendMsgResp, error) {
if req.MsgData != nil {
m.encapsulateMsgData(req.MsgData)
+ if req.MsgData.ContentType == constant.Stream {
+ if err := m.handlerStreamMsg(ctx, req.MsgData); err != nil {
+ return nil, err
+ }
+ }
switch req.MsgData.SessionType {
case constant.SingleChatType:
return m.sendMsgSingleChat(ctx, req)
@@ -83,7 +89,7 @@ func (m *msgServer) setConversationAtInfo(nctx context.Context, msg *sdkws.MsgDa
defer func() {
if r := recover(); r != nil {
- log.ZPanic(nctx, "setConversationAtInfo Panic", errs.ErrPanic(r))
+ mw.PanicStackToLog(nctx, r)
}
}()
@@ -96,14 +102,14 @@ func (m *msgServer) setConversationAtInfo(nctx context.Context, msg *sdkws.MsgDa
ConversationType: msg.SessionType,
GroupID: msg.GroupID,
}
- memberUserIDList, err := m.GroupLocalCache.GetGroupMemberIDs(ctx, msg.GroupID)
- if err != nil {
- log.ZWarn(ctx, "GetGroupMemberIDs", err)
- return
- }
tagAll := datautil.Contain(constant.AtAllString, msg.AtUserIDList...)
if tagAll {
+ memberUserIDList, err := m.GroupLocalCache.GetGroupMemberIDs(ctx, msg.GroupID)
+ if err != nil {
+ log.ZWarn(ctx, "GetGroupMemberIDs", err)
+ return
+ }
memberUserIDList = datautil.DeleteElems(memberUserIDList, msg.SendID)
@@ -113,29 +119,29 @@ func (m *msgServer) setConversationAtInfo(nctx context.Context, msg *sdkws.MsgDa
conversation.GroupAtType = &wrapperspb.Int32Value{Value: constant.AtAll}
} else { // @Everyone and @other people
conversation.GroupAtType = &wrapperspb.Int32Value{Value: constant.AtAllAtMe}
atUserID = datautil.SliceIntersectFuncs(atUserID, memberUserIDList, func(a string) string { return a }, func(b string) string {
return b
})
- if err := m.conversationClient.SetConversations(ctx, atUserID, conversation); err != nil {
+ err = m.Conversation.SetConversations(ctx, atUserID, conversation)
+ if err != nil {
log.ZWarn(ctx, "SetConversations", err, "userID", atUserID, "conversation", conversation)
}
memberUserIDList = datautil.Single(atUserID, memberUserIDList)
}
conversation.GroupAtType = &wrapperspb.Int32Value{Value: constant.AtAll}
- if err := m.conversationClient.SetConversations(ctx, memberUserIDList, conversation); err != nil {
+ err = m.Conversation.SetConversations(ctx, memberUserIDList, conversation)
+ if err != nil {
log.ZWarn(ctx, "SetConversations", err, "userID", memberUserIDList, "conversation", conversation)
}
return
}
- atUserID = datautil.SliceIntersectFuncs(msg.AtUserIDList, memberUserIDList, func(a string) string { return a }, func(b string) string {
- return b
- })
conversation.GroupAtType = &wrapperspb.Int32Value{Value: constant.AtMe}
- if err := m.conversationClient.SetConversations(ctx, atUserID, conversation); err != nil {
- log.ZWarn(ctx, "SetConversations", err, atUserID, conversation)
+ err := m.Conversation.SetConversations(ctx, msg.AtUserIDList, conversation)
+ if err != nil {
+ log.ZWarn(ctx, "SetConversations", err, msg.AtUserIDList, conversation)
}
}
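The branchy @-mention handling above reduces to set operations on user ID slices: the @-list is intersected with the member list so conversations are never set for users outside the group. A small plain-Go illustration of that intersection step (standing in for the generic datautil.SliceIntersectFuncs helper):

// intersect keeps only the @-listed users who are actually group members.
func intersect(atUserIDs, memberIDs []string) []string {
	members := make(map[string]struct{}, len(memberIDs))
	for _, id := range memberIDs {
		members[id] = struct{}{}
	}
	var out []string
	for _, id := range atUserIDs {
		if _, ok := members[id]; ok {
			out = append(out, id)
		}
	}
	return out
}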
-18
@@ -84,21 +84,3 @@ func (m *msgServer) GetActiveConversation(ctx context.Context, req *pbmsg.GetAct
}
return &pbmsg.GetActiveConversationResp{Conversations: conversations}, nil
}
- func (m *msgServer) SetUserConversationMaxSeq(ctx context.Context, req *pbmsg.SetUserConversationMaxSeqReq) (*pbmsg.SetUserConversationMaxSeqResp, error) {
- for _, userID := range req.OwnerUserID {
- if err := m.MsgDatabase.SetUserConversationsMaxSeq(ctx, req.ConversationID, userID, req.MaxSeq); err != nil {
- return nil, err
- }
- }
- return &pbmsg.SetUserConversationMaxSeqResp{}, nil
- }
- func (m *msgServer) SetUserConversationMinSeq(ctx context.Context, req *pbmsg.SetUserConversationMinSeqReq) (*pbmsg.SetUserConversationMinSeqResp, error) {
- for _, userID := range req.OwnerUserID {
- if err := m.MsgDatabase.SetUserConversationsMinSeq(ctx, req.ConversationID, userID, req.MinSeq); err != nil {
- return nil, err
- }
- }
- return &pbmsg.SetUserConversationMinSeqResp{}, nil
- }
+46 -54
@@ -16,7 +16,6 @@ package msg
import (
"context"
"github.com/openimsdk/open-im-server/v3/pkg/rpcli"
"github.com/openimsdk/open-im-server/v3/pkg/common/config"
"github.com/openimsdk/open-im-server/v3/pkg/common/storage/cache/redis"
@@ -27,8 +26,8 @@ import (
"github.com/openimsdk/tools/db/redisutil"
"github.com/openimsdk/open-im-server/v3/pkg/common/storage/controller"
"github.com/openimsdk/open-im-server/v3/pkg/notification"
"github.com/openimsdk/open-im-server/v3/pkg/rpccache"
"github.com/openimsdk/open-im-server/v3/pkg/rpcclient"
"github.com/openimsdk/protocol/constant"
"github.com/openimsdk/protocol/conversation"
"github.com/openimsdk/protocol/msg"
@@ -37,38 +36,39 @@ import (
)
type MessageInterceptorFunc func(ctx context.Context, globalConfig *Config, req *msg.SendMsgReq) (*sdkws.MsgData, error)
+ type (
+ // MessageInterceptorChain defines a chain of message interceptor functions.
+ MessageInterceptorChain []MessageInterceptorFunc
- // MessageInterceptorChain defines a chain of message interceptor functions.
- type MessageInterceptorChain []MessageInterceptorFunc
+ // MsgServer encapsulates dependencies required for message handling.
+ msgServer struct {
+ RegisterCenter discovery.SvcDiscoveryRegistry // Service discovery registry for service registration.
+ MsgDatabase controller.CommonMsgDatabase // Interface for message database operations.
+ StreamMsgDatabase controller.StreamMsgDatabase
+ Conversation *rpcclient.ConversationRpcClient // RPC client for conversation service.
+ UserLocalCache *rpccache.UserLocalCache // Local cache for user data.
+ FriendLocalCache *rpccache.FriendLocalCache // Local cache for friend data.
+ GroupLocalCache *rpccache.GroupLocalCache // Local cache for group data.
+ ConversationLocalCache *rpccache.ConversationLocalCache // Local cache for conversation data.
+ Handlers MessageInterceptorChain // Chain of handlers for processing messages.
+ notificationSender *rpcclient.NotificationSender // RPC client for sending notifications.
+ msgNotificationSender *MsgNotificationSender // RPC client for sending msg notifications.
+ config *Config // Global configuration settings.
+ webhookClient *webhook.Client
+ }
- type Config struct {
- RpcConfig config.Msg
- RedisConfig config.Redis
- MongodbConfig config.Mongo
- KafkaConfig config.Kafka
- NotificationConfig config.Notification
- Share config.Share
- WebhooksConfig config.Webhooks
- LocalCacheConfig config.LocalCache
- Discovery config.Discovery
- }
- // MsgServer encapsulates dependencies required for message handling.
- type msgServer struct {
- msg.UnimplementedMsgServer
- RegisterCenter discovery.SvcDiscoveryRegistry // Service discovery registry for service registration.
- MsgDatabase controller.CommonMsgDatabase // Interface for message database operations.
- UserLocalCache *rpccache.UserLocalCache // Local cache for user data.
- FriendLocalCache *rpccache.FriendLocalCache // Local cache for friend data.
- GroupLocalCache *rpccache.GroupLocalCache // Local cache for group data.
- ConversationLocalCache *rpccache.ConversationLocalCache // Local cache for conversation data.
- Handlers MessageInterceptorChain // Chain of handlers for processing messages.
- notificationSender *rpcclient.NotificationSender // RPC client for sending notifications.
- msgNotificationSender *MsgNotificationSender // RPC client for sending msg notifications.
- config *Config // Global configuration settings.
- webhookClient *webhook.Client
- conversationClient *rpcli.ConversationClient
- }
+ Config struct {
+ RpcConfig config.Msg
+ RedisConfig config.Redis
+ MongodbConfig config.Mongo
+ KafkaConfig config.Kafka
+ NotificationConfig config.Notification
+ Share config.Share
+ WebhooksConfig config.Webhooks
+ LocalCacheConfig config.LocalCache
+ Discovery config.Discovery
+ }
+ )
func (m *msgServer) addInterceptorHandler(interceptorFunc ...MessageInterceptorFunc) {
m.Handlers = append(m.Handlers, interceptorFunc...)
@@ -88,7 +88,11 @@ func Start(ctx context.Context, config *Config, client discovery.SvcDiscoveryReg
if err != nil {
return err
}
- msgModel := redis.NewMsgCache(rdb, msgDocModel)
+ msgModel := redis.NewMsgCache(rdb)
+ conversationClient := rpcclient.NewConversationRpcClient(client, config.Share.RpcRegisterName.Conversation)
+ userRpcClient := rpcclient.NewUserRpcClient(client, config.Share.RpcRegisterName.User, config.Share.IMAdminUserID)
+ groupRpcClient := rpcclient.NewGroupRpcClient(client, config.Share.RpcRegisterName.Group)
+ friendRpcClient := rpcclient.NewFriendRpcClient(client, config.Share.RpcRegisterName.Friend)
seqConversation, err := mgo.NewSeqConversationMongo(mgocli.GetDB())
if err != nil {
return err
@@ -98,38 +102,26 @@ func Start(ctx context.Context, config *Config, client discovery.SvcDiscoveryReg
if err != nil {
return err
}
+ streamMsg, err := mgo.NewStreamMsgMongo(mgocli.GetDB())
+ if err != nil {
+ return err
+ }
seqUserCache := redis.NewSeqUserCacheRedis(rdb, seqUser)
msgDatabase, err := controller.NewCommonMsgDatabase(msgDocModel, msgModel, seqUserCache, seqConversationCache, &config.KafkaConfig)
if err != nil {
return err
}
- userConn, err := client.GetConn(ctx, config.Share.RpcRegisterName.User)
- if err != nil {
- return err
- }
- groupConn, err := client.GetConn(ctx, config.Share.RpcRegisterName.Group)
- if err != nil {
- return err
- }
- friendConn, err := client.GetConn(ctx, config.Share.RpcRegisterName.Friend)
- if err != nil {
- return err
- }
- conversationConn, err := client.GetConn(ctx, config.Share.RpcRegisterName.Conversation)
- if err != nil {
- return err
- }
- conversationClient := rpcli.NewConversationClient(conversationConn)
s := &msgServer{
+ Conversation: &conversationClient,
MsgDatabase: msgDatabase,
+ StreamMsgDatabase: controller.NewStreamMsgDatabase(streamMsg),
RegisterCenter: client,
- UserLocalCache: rpccache.NewUserLocalCache(rpcli.NewUserClient(userConn), &config.LocalCacheConfig, rdb),
- GroupLocalCache: rpccache.NewGroupLocalCache(rpcli.NewGroupClient(groupConn), &config.LocalCacheConfig, rdb),
+ UserLocalCache: rpccache.NewUserLocalCache(userRpcClient, &config.LocalCacheConfig, rdb),
+ GroupLocalCache: rpccache.NewGroupLocalCache(groupRpcClient, &config.LocalCacheConfig, rdb),
ConversationLocalCache: rpccache.NewConversationLocalCache(conversationClient, &config.LocalCacheConfig, rdb),
- FriendLocalCache: rpccache.NewFriendLocalCache(rpcli.NewRelationClient(friendConn), &config.LocalCacheConfig, rdb),
+ FriendLocalCache: rpccache.NewFriendLocalCache(friendRpcClient, &config.LocalCacheConfig, rdb),
config: config,
webhookClient: webhook.NewWebhookClient(config.WebhooksConfig.URL),
- conversationClient: conversationClient,
}
s.notificationSender = rpcclient.NewNotificationSender(&config.NotificationConfig, rpcclient.WithLocalSendMsg(s.SendMsg))
+114
@@ -0,0 +1,114 @@
package msg
import (
"context"
"fmt"
"github.com/openimsdk/open-im-server/v3/pkg/common/storage/model"
"github.com/openimsdk/open-im-server/v3/pkg/msgprocessor"
"github.com/openimsdk/protocol/constant"
"github.com/openimsdk/protocol/msg"
"github.com/openimsdk/protocol/sdkws"
"github.com/openimsdk/tools/errs"
"time"
)
const StreamDeadlineTime = time.Second * 60 * 10
func (m *msgServer) handlerStreamMsg(ctx context.Context, msgData *sdkws.MsgData) error {
now := time.Now()
val := &model.StreamMsg{
ClientMsgID: msgData.ClientMsgID,
ConversationID: msgprocessor.GetConversationIDByMsg(msgData),
UserID: msgData.SendID,
CreateTime: now,
DeadlineTime: now.Add(StreamDeadlineTime),
}
return m.StreamMsgDatabase.CreateStreamMsg(ctx, val)
}
func (m *msgServer) getStreamMsg(ctx context.Context, clientMsgID string) (*model.StreamMsg, error) {
res, err := m.StreamMsgDatabase.GetStreamMsg(ctx, clientMsgID)
if err != nil {
return nil, err
}
now := time.Now()
if !res.End && res.DeadlineTime.Before(now) {
res.End = true
res.DeadlineTime = now
_ = m.StreamMsgDatabase.AppendStreamMsg(ctx, res.ClientMsgID, 0, nil, true, now)
}
return res, nil
}
func (m *msgServer) AppendStreamMsg(ctx context.Context, req *msg.AppendStreamMsgReq) (*msg.AppendStreamMsgResp, error) {
res, err := m.getStreamMsg(ctx, req.ClientMsgID)
if err != nil {
return nil, err
}
if res.End {
return nil, errs.ErrNoPermission.WrapMsg("stream msg is end")
}
if len(res.Packets) < int(req.StartIndex) {
return nil, errs.ErrNoPermission.WrapMsg("start index is invalid")
}
if val := len(res.Packets) - int(req.StartIndex); val > 0 {
exist := res.Packets[int(req.StartIndex):]
for i, s := range exist {
if len(req.Packets) == 0 {
break
}
if s != req.Packets[i] {
return nil, errs.ErrNoPermission.WrapMsg(fmt.Sprintf("packet %d has been written and is inconsistent", i))
}
req.StartIndex++
req.Packets = req.Packets[1:]
}
}
if len(req.Packets) == 0 && res.End == req.End {
return &msg.AppendStreamMsgResp{}, nil
}
deadlineTime := time.Now().Add(StreamDeadlineTime)
if err := m.StreamMsgDatabase.AppendStreamMsg(ctx, req.ClientMsgID, int(req.StartIndex), req.Packets, req.End, deadlineTime); err != nil {
return nil, err
}
conversation, err := m.Conversation.GetConversation(ctx, res.UserID, res.ConversationID)
if err != nil {
return nil, err
}
tips := &sdkws.StreamMsgTips{
ConversationID: res.ConversationID,
ClientMsgID: res.ClientMsgID,
StartIndex: req.StartIndex,
Packets: req.Packets,
End: req.End,
}
var (
recvID string
sessionType int32
)
if conversation.GroupID == "" {
sessionType = constant.SingleChatType
recvID = conversation.UserID
} else {
sessionType = constant.ReadGroupChatType
recvID = conversation.GroupID
}
m.msgNotificationSender.StreamMsgNotification(ctx, res.UserID, recvID, sessionType, tips)
return &msg.AppendStreamMsgResp{}, nil
}
func (m *msgServer) GetStreamMsg(ctx context.Context, req *msg.GetStreamMsgReq) (*msg.GetStreamMsgResp, error) {
res, err := m.getStreamMsg(ctx, req.ClientMsgID)
if err != nil {
return nil, err
}
return &msg.GetStreamMsgResp{
ClientMsgID: res.ClientMsgID,
ConversationID: res.ConversationID,
UserID: res.UserID,
Packets: res.Packets,
End: res.End,
CreateTime: res.CreateTime.UnixMilli(),
DeadlineTime: res.DeadlineTime.UnixMilli(),
}, nil
}
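AppendStreamMsg above is written to tolerate retries: the server compares any overlapping packets against what it already stored and only appends the genuinely new tail, so a client can resend from its last known StartIndex. A hypothetical client-side sender loop under those assumptions (the request field names match the new file; the client interface is assumed to be the generated msg.MsgClient):

// sendStream appends packets one chunk at a time, advancing StartIndex
// only as chunks are accepted; resending the same index is safe.
func sendStream(ctx context.Context, cli msg.MsgClient, clientMsgID string, chunks []string) error {
	for i, chunk := range chunks {
		req := &msg.AppendStreamMsgReq{
			ClientMsgID: clientMsgID,
			StartIndex:  int64(i), // server rejects gaps, tolerates overlap
			Packets:     []string{chunk},
			End:         i == len(chunks)-1, // mark the final packet
		}
		if _, err := cli.AppendStreamMsg(ctx, req); err != nil {
			return err // retrying with the same StartIndex is acceptable
		}
	}
	return nil
}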
+1 -11
@@ -92,7 +92,7 @@ func (m *msgServer) GetSeqMessage(ctx context.Context, req *msg.GetSeqMessageReq
NotificationMsgs: make(map[string]*sdkws.PullMsgs),
}
for _, conv := range req.Conversations {
- isEnd, endSeq, msgs, err := m.MsgDatabase.GetMessagesBySeqWithBounds(ctx, req.UserID, conv.ConversationID, conv.Seqs, req.GetOrder())
+ _, _, msgs, err := m.MsgDatabase.GetMsgBySeqs(ctx, req.UserID, conv.ConversationID, conv.Seqs)
if err != nil {
return nil, err
}
@@ -111,8 +111,6 @@ func (m *msgServer) GetSeqMessage(ctx context.Context, req *msg.GetSeqMessageReq
}
}
pullMsgs.Msgs = append(pullMsgs.Msgs, msgs...)
- pullMsgs.IsEnd = isEnd
- pullMsgs.EndSeq = endSeq
}
return resp, nil
}
@@ -245,11 +243,3 @@ func (m *msgServer) SearchMessage(ctx context.Context, req *msg.SearchMessageReq
func (m *msgServer) GetServerTime(ctx context.Context, _ *msg.GetServerTimeReq) (*msg.GetServerTimeResp, error) {
return &msg.GetServerTimeResp{ServerTime: timeutil.GetCurrentTimestampByMill()}, nil
}
- func (m *msgServer) GetLastMessage(ctx context.Context, req *msg.GetLastMessageReq) (*msg.GetLastMessageResp, error) {
- msgs, err := m.MsgDatabase.GetLastMessage(ctx, req.ConversationIDs, req.UserID)
- if err != nil {
- return nil, err
- }
- return &msg.GetLastMessageResp{Msgs: msgs}, nil
- }
+6 -17
@@ -39,7 +39,7 @@ func (s *friendServer) GetPaginationBlacks(ctx context.Context, req *relation.Ge
return nil, err
}
resp = &relation.GetPaginationBlacksResp{}
- resp.Blacks, err = convert.BlackDB2Pb(ctx, blacks, s.userClient.GetUsersInfoMap)
+ resp.Blacks, err = convert.BlackDB2Pb(ctx, blacks, s.userRpcClient.GetUsersInfoMap)
if err != nil {
return nil, err
}
@@ -81,7 +81,9 @@ func (s *friendServer) AddBlack(ctx context.Context, req *relation.AddBlackReq)
if err := s.webhookBeforeAddBlack(ctx, &s.config.WebhooksConfig.BeforeAddBlack, req); err != nil {
return nil, err
}
- if err := s.userClient.CheckUser(ctx, []string{req.OwnerUserID, req.BlackUserID}); err != nil {
+ _, err := s.userRpcClient.GetUsersInfo(ctx, []string{req.OwnerUserID, req.BlackUserID})
+ if err != nil {
return nil, err
}
black := model.Black{
@@ -112,7 +114,7 @@ func (s *friendServer) GetSpecifiedBlacks(ctx context.Context, req *relation.Get
return nil, errs.ErrArgs.WrapMsg("userIDList repeated")
}
- userMap, err := s.userClient.GetUsersInfoMap(ctx, req.UserIDList)
+ userMap, err := s.userRpcClient.GetPublicUserInfoMap(ctx, req.UserIDList)
if err != nil {
return nil, err
}
@@ -130,26 +132,13 @@ func (s *friendServer) GetSpecifiedBlacks(ctx context.Context, req *relation.Get
Blacks: make([]*sdkws.BlackInfo, 0, len(req.UserIDList)),
}
- toPublcUser := func(userID string) *sdkws.PublicUserInfo {
- v, ok := userMap[userID]
- if !ok {
- return nil
- }
- return &sdkws.PublicUserInfo{
- UserID: v.UserID,
- Nickname: v.Nickname,
- FaceURL: v.FaceURL,
- Ex: v.Ex,
- }
- }
for _, userID := range req.UserIDList {
if black := blackMap[userID]; black != nil {
resp.Blacks = append(resp.Blacks,
&sdkws.BlackInfo{
OwnerUserID: black.OwnerUserID,
CreateTime: black.CreateTime.UnixMilli(),
- BlackUserInfo: toPublcUser(userID),
+ BlackUserInfo: userMap[userID],
AddSource: black.AddSource,
OperatorUserID: black.OperatorUserID,
Ex: black.Ex,
+45 -36
@@ -16,7 +16,6 @@ package relation
import (
"context"
"github.com/openimsdk/open-im-server/v3/pkg/rpcli"
"github.com/openimsdk/tools/mq/memamq"
@@ -32,6 +31,7 @@ import (
"github.com/openimsdk/open-im-server/v3/pkg/common/convert"
"github.com/openimsdk/open-im-server/v3/pkg/common/servererrs"
"github.com/openimsdk/open-im-server/v3/pkg/common/storage/controller"
"github.com/openimsdk/open-im-server/v3/pkg/rpcclient"
"github.com/openimsdk/protocol/constant"
"github.com/openimsdk/protocol/relation"
"github.com/openimsdk/protocol/sdkws"
@@ -43,15 +43,15 @@ import (
)
type friendServer struct {
- relation.UnimplementedFriendServer
- db controller.FriendDatabase
- blackDatabase controller.BlackDatabase
- notificationSender *FriendNotificationSender
- RegisterCenter discovery.SvcDiscoveryRegistry
- config *Config
- webhookClient *webhook.Client
- queue *memamq.MemoryQueue
- userClient *rpcli.UserClient
+ db controller.FriendDatabase
+ blackDatabase controller.BlackDatabase
+ userRpcClient *rpcclient.UserRpcClient
+ notificationSender *FriendNotificationSender
+ conversationRpcClient rpcclient.ConversationRpcClient
+ RegisterCenter discovery.SvcDiscoveryRegistry
+ config *Config
+ webhookClient *webhook.Client
+ queue *memamq.MemoryQueue
}
type Config struct {
@@ -91,21 +91,15 @@ func Start(ctx context.Context, config *Config, client discovery.SvcDiscoveryReg
return err
}
- userConn, err := client.GetConn(ctx, config.Share.RpcRegisterName.User)
- if err != nil {
- return err
- }
- msgConn, err := client.GetConn(ctx, config.Share.RpcRegisterName.Msg)
- if err != nil {
- return err
- }
- userClient := rpcli.NewUserClient(userConn)
+ // Initialize RPC clients
+ userRpcClient := rpcclient.NewUserRpcClient(client, config.Share.RpcRegisterName.User, config.Share.IMAdminUserID)
+ msgRpcClient := rpcclient.NewMessageRpcClient(client, config.Share.RpcRegisterName.Msg)
+ // Initialize notification sender
notificationSender := NewFriendNotificationSender(
&config.NotificationConfig,
- rpcli.NewMsgClient(msgConn),
- WithRpcFunc(userClient.GetUsersInfo),
+ &msgRpcClient,
+ WithRpcFunc(userRpcClient.GetUsersInfo),
)
localcache.InitLocalCache(&config.LocalCacheConfig)
@@ -121,12 +115,13 @@ func Start(ctx context.Context, config *Config, client discovery.SvcDiscoveryReg
blackMongoDB,
redis.NewBlackCacheRedis(rdb, &config.LocalCacheConfig, blackMongoDB, redis.GetRocksCacheOptions()),
),
- notificationSender: notificationSender,
- RegisterCenter: client,
- config: config,
- webhookClient: webhook.NewWebhookClient(config.WebhooksConfig.URL),
- queue: memamq.NewMemoryQueue(16, 1024*1024),
- userClient: userClient,
+ userRpcClient: &userRpcClient,
+ notificationSender: notificationSender,
+ RegisterCenter: client,
+ conversationRpcClient: rpcclient.NewConversationRpcClient(client, config.Share.RpcRegisterName.Conversation),
+ config: config,
+ webhookClient: webhook.NewWebhookClient(config.WebhooksConfig.URL),
+ queue: memamq.NewMemoryQueue(16, 1024*1024),
})
return nil
}
@@ -143,7 +138,7 @@ func (s *friendServer) ApplyToAddFriend(ctx context.Context, req *relation.Apply
if err = s.webhookBeforeAddFriend(ctx, &s.config.WebhooksConfig.BeforeAddFriend, req); err != nil && err != servererrs.ErrCallbackContinue {
return nil, err
}
- if err := s.userClient.CheckUser(ctx, []string{req.ToUserID, req.FromUserID}); err != nil {
+ if _, err := s.userRpcClient.GetUsersInfoMap(ctx, []string{req.ToUserID, req.FromUserID}); err != nil {
return nil, err
}
@@ -167,8 +162,7 @@ func (s *friendServer) ImportFriends(ctx context.Context, req *relation.ImportFr
if err := authverify.CheckAdmin(ctx, s.config.Share.IMAdminUserID); err != nil {
return nil, err
}
- if err := s.userClient.CheckUser(ctx, append([]string{req.OwnerUserID}, req.FriendUserIDs...)); err != nil {
+ if _, err := s.userRpcClient.GetUsersInfo(ctx, append([]string{req.OwnerUserID}, req.FriendUserIDs...)); err != nil {
return nil, err
}
if datautil.Contain(req.OwnerUserID, req.FriendUserIDs...) {
@@ -309,7 +303,7 @@ func (s *friendServer) getFriend(ctx context.Context, ownerUserID string, friend
if err != nil {
return nil, err
}
- return convert.FriendsDB2Pb(ctx, friends, s.userClient.GetUsersInfoMap)
+ return convert.FriendsDB2Pb(ctx, friends, s.userRpcClient.GetUsersInfoMap)
}
// Get the list of friend requests sent out proactively.
@@ -321,7 +315,7 @@ func (s *friendServer) GetDesignatedFriendsApply(ctx context.Context,
return nil, err
}
resp = &relation.GetDesignatedFriendsApplyResp{}
- resp.FriendRequests, err = convert.FriendRequestDB2Pb(ctx, friendRequests, s.userClient.GetUsersInfoMap)
+ resp.FriendRequests, err = convert.FriendRequestDB2Pb(ctx, friendRequests, s.userRpcClient.GetUsersInfoMap)
if err != nil {
return nil, err
}
@@ -340,7 +334,7 @@ func (s *friendServer) GetPaginationFriendsApplyTo(ctx context.Context, req *rel
}
resp = &relation.GetPaginationFriendsApplyToResp{}
- resp.FriendRequests, err = convert.FriendRequestDB2Pb(ctx, friendRequests, s.userClient.GetUsersInfoMap)
+ resp.FriendRequests, err = convert.FriendRequestDB2Pb(ctx, friendRequests, s.userRpcClient.GetUsersInfoMap)
if err != nil {
return nil, err
}
@@ -362,7 +356,7 @@ func (s *friendServer) GetPaginationFriendsApplyFrom(ctx context.Context, req *r
return nil, err
}
- resp.FriendRequests, err = convert.FriendRequestDB2Pb(ctx, friendRequests, s.userClient.GetUsersInfoMap)
+ resp.FriendRequests, err = convert.FriendRequestDB2Pb(ctx, friendRequests, s.userRpcClient.GetUsersInfoMap)
if err != nil {
return nil, err
}
@@ -393,7 +387,7 @@ func (s *friendServer) GetPaginationFriends(ctx context.Context, req *relation.G
}
resp = &relation.GetPaginationFriendsResp{}
- resp.FriendsInfo, err = convert.FriendsDB2Pb(ctx, friends, s.userClient.GetUsersInfoMap)
+ resp.FriendsInfo, err = convert.FriendsDB2Pb(ctx, friends, s.userRpcClient.GetUsersInfoMap)
if err != nil {
return nil, err
}
@@ -426,7 +420,7 @@ func (s *friendServer) GetSpecifiedFriendsInfo(ctx context.Context, req *relatio
return nil, errs.ErrArgs.WrapMsg("userIDList repeated")
}
- userMap, err := s.userClient.GetUsersInfoMap(ctx, req.UserIDList)
+ userMap, err := s.userRpcClient.GetUsersInfoMap(ctx, req.UserIDList)
if err != nil {
return nil, err
}
@@ -530,3 +524,18 @@ func (s *friendServer) UpdateFriends(
s.notificationSender.FriendsInfoUpdateNotification(ctx, req.OwnerUserID, req.FriendUserIDs)
return resp, nil
}
func (s *friendServer) GetIncrementalFriendsApplyTo(ctx context.Context, req *relation.GetIncrementalFriendsApplyToReq) (*relation.GetIncrementalFriendsApplyToResp, error) {
// TODO implement me
return nil, nil
}
func (s *friendServer) GetIncrementalFriendsApplyFrom(ctx context.Context, req *relation.GetIncrementalFriendsApplyFromReq) (*relation.GetIncrementalFriendsApplyFromResp, error) {
// TODO implement me
return nil, nil
}
func (s *friendServer) GetIncrementalBlacks(ctx context.Context, req *relation.GetIncrementalBlacksReq) (*relation.GetIncrementalBlacksResp, error) {
// TODO implement me
return nil, nil
}
+11 -12
@@ -16,9 +16,6 @@ package relation
import (
"context"
"github.com/openimsdk/open-im-server/v3/pkg/rpcli"
"github.com/openimsdk/protocol/msg"
"github.com/openimsdk/open-im-server/v3/pkg/common/storage/database"
"github.com/openimsdk/open-im-server/v3/pkg/common/storage/versionctx"
@@ -27,8 +24,8 @@ import (
"github.com/openimsdk/open-im-server/v3/pkg/common/config"
"github.com/openimsdk/open-im-server/v3/pkg/common/convert"
"github.com/openimsdk/open-im-server/v3/pkg/common/storage/controller"
"github.com/openimsdk/open-im-server/v3/pkg/notification"
"github.com/openimsdk/open-im-server/v3/pkg/notification/common_user"
"github.com/openimsdk/open-im-server/v3/pkg/rpcclient"
"github.com/openimsdk/open-im-server/v3/pkg/rpcclient/notification"
"github.com/openimsdk/protocol/constant"
"github.com/openimsdk/protocol/relation"
"github.com/openimsdk/protocol/sdkws"
@@ -38,7 +35,7 @@ import (
type FriendNotificationSender struct {
*rpcclient.NotificationSender
// Target not found err
- getUsersInfo func(ctx context.Context, userIDs []string) ([]common_user.CommonUser, error)
+ getUsersInfo func(ctx context.Context, userIDs []string) ([]notification.CommonUser, error)
// db controller
db controller.FriendDatabase
}
@@ -55,7 +52,7 @@ func WithDBFunc(
fn func(ctx context.Context, userIDs []string) (users []*relationtb.User, err error),
) friendNotificationSenderOptions {
return func(s *FriendNotificationSender) {
- f := func(ctx context.Context, userIDs []string) (result []common_user.CommonUser, err error) {
+ f := func(ctx context.Context, userIDs []string) (result []notification.CommonUser, err error) {
users, err := fn(ctx, userIDs)
if err != nil {
return nil, err
@@ -73,7 +70,7 @@ func WithRpcFunc(
fn func(ctx context.Context, userIDs []string) ([]*sdkws.UserInfo, error),
) friendNotificationSenderOptions {
return func(s *FriendNotificationSender) {
- f := func(ctx context.Context, userIDs []string) (result []common_user.CommonUser, err error) {
+ f := func(ctx context.Context, userIDs []string) (result []notification.CommonUser, err error) {
users, err := fn(ctx, userIDs)
if err != nil {
return nil, err
@@ -87,11 +84,13 @@ func WithRpcFunc(
}
}
- func NewFriendNotificationSender(conf *config.Notification, msgClient *rpcli.MsgClient, opts ...friendNotificationSenderOptions) *FriendNotificationSender {
+ func NewFriendNotificationSender(
+ conf *config.Notification,
+ msgRpcClient *rpcclient.MessageRpcClient,
+ opts ...friendNotificationSenderOptions,
+ ) *FriendNotificationSender {
f := &FriendNotificationSender{
- NotificationSender: rpcclient.NewNotificationSender(conf, rpcclient.WithRpcClient(func(ctx context.Context, req *msg.SendMsgReq) (*msg.SendMsgResp, error) {
- return msgClient.SendMsg(ctx, req)
- })),
+ NotificationSender: rpcclient.NewNotificationSender(conf, rpcclient.WithRpcClient(msgRpcClient)),
}
for _, opt := range opts {
opt(f)
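WithDBFunc and WithRpcFunc above are functional options: each adapts a different user-lookup source (database or RPC) to the sender's single internal getUsersInfo signature. A condensed sketch of the pattern with simplified, illustrative types:

type Sender struct {
	getUsersInfo func(ctx context.Context, userIDs []string) ([]string, error)
}

type Option func(*Sender)

// WithLookup adapts any lookup function to the sender's internal shape.
func WithLookup(fn func(ctx context.Context, userIDs []string) ([]string, error)) Option {
	return func(s *Sender) { s.getUsersInfo = fn }
}

func NewSender(opts ...Option) *Sender {
	s := &Sender{}
	for _, opt := range opts {
		opt(s) // each option mutates the partially built sender
	}
	return s
}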
+11 -10
@@ -19,9 +19,10 @@ import (
"crypto/rand"
"time"
relationtb "github.com/openimsdk/open-im-server/v3/pkg/common/storage/model"
"github.com/openimsdk/open-im-server/v3/pkg/authverify"
"github.com/openimsdk/open-im-server/v3/pkg/common/servererrs"
relationtb "github.com/openimsdk/open-im-server/v3/pkg/common/storage/model"
"github.com/openimsdk/protocol/constant"
"github.com/openimsdk/protocol/third"
"github.com/openimsdk/tools/errs"
@@ -49,14 +50,14 @@ func (t *thirdServer) UploadLogs(ctx context.Context, req *third.UploadLogsReq)
platform := constant.PlatformID2Name[int(req.Platform)]
for _, fileURL := range req.FileURLs {
log := relationtb.Log{
- Platform: platform,
- UserID: userID,
- CreateTime: time.Now(),
- Url: fileURL.URL,
- FileName: fileURL.Filename,
- AppFramework: req.AppFramework,
- Version: req.Version,
- Ex: req.Ex,
+ Platform: platform,
+ UserID: userID,
+ CreateTime: time.Now(),
+ Url: fileURL.URL,
+ FileName: fileURL.Filename,
+ SystemType: req.SystemType,
+ Version: req.Version,
+ Ex: req.Ex,
}
for i := 0; i < 20; i++ {
id := genLogID()
@@ -148,7 +149,7 @@ func (t *thirdServer) SearchLogs(ctx context.Context, req *third.SearchLogsReq)
for _, log := range logs {
userIDs = append(userIDs, log.UserID)
}
- userMap, err := t.userClient.GetUsersInfoMap(ctx, userIDs)
+ userMap, err := t.userRpcClient.GetUsersInfoMap(ctx, userIDs)
if err != nil {
return nil, err
}
+41 -23
@@ -19,14 +19,17 @@ import (
"encoding/base64"
"encoding/hex"
"encoding/json"
"github.com/openimsdk/open-im-server/v3/pkg/authverify"
"path"
"strconv"
"time"
"github.com/openimsdk/open-im-server/v3/pkg/common/config"
"github.com/openimsdk/open-im-server/v3/pkg/common/storage/model"
"go.mongodb.org/mongo-driver/mongo"
"github.com/google/uuid"
"github.com/openimsdk/open-im-server/v3/pkg/common/servererrs"
"github.com/openimsdk/open-im-server/v3/pkg/common/storage/model"
"github.com/openimsdk/protocol/sdkws"
"github.com/openimsdk/protocol/third"
"github.com/openimsdk/tools/errs"
"github.com/openimsdk/tools/log"
@@ -285,35 +288,50 @@ func (t *thirdServer) apiAddress(prefix, name string) string {
}
func (t *thirdServer) DeleteOutdatedData(ctx context.Context, req *third.DeleteOutdatedDataReq) (*third.DeleteOutdatedDataResp, error) {
if err := authverify.CheckAdmin(ctx, t.config.Share.IMAdminUserID); err != nil {
return nil, err
}
- engine := t.config.RpcConfig.Object.Enable
+ var conf config.Third
expireTime := time.UnixMilli(req.ExpireTime)
- // Find all expired data in S3 database
- models, err := t.s3dataBase.FindExpirationObject(ctx, engine, expireTime, req.ObjectGroup, int64(req.Limit))
- if err != nil {
- return nil, err
+ var deltotal int
+ findPagination := &sdkws.RequestPagination{
+ PageNumber: 1,
+ ShowNumber: 1000,
+ }
- for i, obj := range models {
- if err := t.s3dataBase.DeleteSpecifiedData(ctx, engine, []string{obj.Name}); err != nil {
+ for {
+ total, models, err := t.s3dataBase.FindByExpires(ctx, expireTime, findPagination)
+ if err != nil && errs.Unwrap(err) != mongo.ErrNoDocuments {
+ return nil, errs.Wrap(err)
+ }
- if err := t.s3dataBase.DelS3Key(ctx, engine, obj.Name); err != nil {
- return nil, err
+ needDelObjectKeys := make([]string, 0)
+ for _, model := range models {
+ needDelObjectKeys = append(needDelObjectKeys, model.Key)
+ }
- count, err := t.s3dataBase.GetKeyCount(ctx, engine, obj.Key)
- if err != nil {
- return nil, err
- }
- log.ZDebug(ctx, "delete s3 object record", "index", i, "s3", obj, "count", count)
- if count == 0 {
- if err := t.s3.DeleteObject(ctx, obj.Key); err != nil {
- return nil, err
+ needDelObjectKeys = datautil.Distinct(needDelObjectKeys)
+ for _, key := range needDelObjectKeys {
+ count, err := t.s3dataBase.FindNotDelByS3(ctx, key, expireTime)
+ if err != nil && errs.Unwrap(err) != mongo.ErrNoDocuments {
+ return nil, errs.Wrap(err)
+ }
+ if int(count) < 1 && t.minio != nil {
+ thumbnailKey, _ := t.getMinioImageThumbnailKey(ctx, key)
+ t.s3dataBase.DeleteObject(ctx, thumbnailKey)
+ t.s3dataBase.DelS3Key(ctx, conf.Object.Enable, needDelObjectKeys...)
+ t.s3dataBase.DeleteObject(ctx, key)
+ }
+ }
+ for _, model := range models {
+ err := t.s3dataBase.DeleteSpecifiedData(ctx, model.Engine, model.Name)
+ if err != nil {
+ return nil, errs.Wrap(err)
+ }
+ }
+ if total < int64(findPagination.ShowNumber) {
+ break
+ }
+ deltotal += int(total)
+ }
- return &third.DeleteOutdatedDataResp{Count: int32(len(models))}, nil
+ log.ZDebug(ctx, "DeleteOutdatedData", "delete Total", deltotal)
+ return &third.DeleteOutdatedDataResp{}, nil
}
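The replacement loop above pages through expired objects 1000 at a time and stops when a short page signals the end of the scan. The control flow, reduced to its skeleton (fetchExpiredPage and deleteKey are assumed helpers for illustration, not the repo's API):

const pageSize = 1000
for {
	keys, err := fetchExpiredPage(ctx, expireTime, pageSize) // assumed helper
	if err != nil {
		return err
	}
	for _, k := range keys {
		_ = deleteKey(ctx, k) // per-key failures could be logged and skipped
	}
	if len(keys) < pageSize {
		break // short page: nothing left to scan
	}
	// The page number can stay fixed because each pass deletes what it fetched.
}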
type FormDataMate struct {
+16 -15
@@ -17,14 +17,14 @@ package third
import (
"context"
"fmt"
"github.com/openimsdk/open-im-server/v3/pkg/rpcli"
"time"
"github.com/openimsdk/open-im-server/v3/pkg/common/config"
"github.com/openimsdk/open-im-server/v3/pkg/common/storage/cache/redis"
"github.com/openimsdk/open-im-server/v3/pkg/common/storage/controller"
"github.com/openimsdk/open-im-server/v3/pkg/common/storage/database/mgo"
"github.com/openimsdk/open-im-server/v3/pkg/localcache"
"time"
"github.com/openimsdk/open-im-server/v3/pkg/common/storage/controller"
"github.com/openimsdk/open-im-server/v3/pkg/rpcclient"
"github.com/openimsdk/protocol/third"
"github.com/openimsdk/tools/db/mongoutil"
"github.com/openimsdk/tools/db/redisutil"
@@ -38,13 +38,12 @@ import (
)
type thirdServer struct {
- third.UnimplementedThirdServer
thirdDatabase controller.ThirdDatabase
s3dataBase controller.S3Database
+ userRpcClient rpcclient.UserRpcClient
defaultExpire time.Duration
config *Config
s3 s3.Interface
- userClient *rpcli.UserClient
+ minio *minio.Minio
}
type Config struct {
@@ -79,11 +78,13 @@ func Start(ctx context.Context, config *Config, client discovery.SvcDiscoveryReg
// Select the oss method according to the profile policy
enable := config.RpcConfig.Object.Enable
var (
- o s3.Interface
+ o s3.Interface
+ minioCli *minio.Minio
)
switch enable {
case "minio":
- o, err = minio.NewMinio(ctx, redis.NewMinioCache(rdb), *config.MinioConfig.Build())
+ minioCli, err = minio.NewMinio(ctx, redis.NewMinioCache(rdb), *config.MinioConfig.Build())
+ o = minioCli
case "cos":
o, err = cos.NewCos(*config.RpcConfig.Object.Cos.Build())
case "oss":
@@ -96,22 +97,22 @@ func Start(ctx context.Context, config *Config, client discovery.SvcDiscoveryReg
if err != nil {
return err
}
- userConn, err := client.GetConn(ctx, config.Share.RpcRegisterName.User)
- if err != nil {
- return err
- }
localcache.InitLocalCache(&config.LocalCacheConfig)
third.RegisterThirdServer(server, &thirdServer{
thirdDatabase: controller.NewThirdDatabase(redis.NewThirdCache(rdb), logdb),
userRpcClient: rpcclient.NewUserRpcClient(client, config.Share.RpcRegisterName.User, config.Share.IMAdminUserID),
s3dataBase: controller.NewS3Database(rdb, o, s3db),
defaultExpire: time.Hour * 24 * 7,
config: config,
s3: o,
- userClient: rpcli.NewUserClient(userConn),
+ minio: minioCli,
})
return nil
}
func (t *thirdServer) getMinioImageThumbnailKey(ctx context.Context, name string) (string, error) {
return t.minio.GetImageThumbnailKey(ctx, name)
}
func (t *thirdServer) FcmUpdateToken(ctx context.Context, req *third.FcmUpdateTokenReq) (resp *third.FcmUpdateTokenResp, err error) {
err = t.thirdDatabase.FcmUpdateToken(ctx, req.Account, int(req.PlatformID), req.FcmToken, req.ExpireTime)
if err != nil {
+6 -11
@@ -16,21 +16,18 @@ package user
import (
"context"
"github.com/openimsdk/open-im-server/v3/pkg/rpcli"
"github.com/openimsdk/protocol/msg"
relationtb "github.com/openimsdk/open-im-server/v3/pkg/common/storage/model"
"github.com/openimsdk/open-im-server/v3/pkg/notification/common_user"
"github.com/openimsdk/open-im-server/v3/pkg/rpcclient/notification"
"github.com/openimsdk/open-im-server/v3/pkg/common/storage/controller"
"github.com/openimsdk/open-im-server/v3/pkg/notification"
"github.com/openimsdk/open-im-server/v3/pkg/rpcclient"
"github.com/openimsdk/protocol/constant"
"github.com/openimsdk/protocol/sdkws"
)
type UserNotificationSender struct {
*rpcclient.NotificationSender
- getUsersInfo func(ctx context.Context, userIDs []string) ([]common_user.CommonUser, error)
+ getUsersInfo func(ctx context.Context, userIDs []string) ([]notification.CommonUser, error)
// db controller
db controller.UserDatabase
}
@@ -47,7 +44,7 @@ func WithUserFunc(
fn func(ctx context.Context, userIDs []string) (users []*relationtb.User, err error),
) userNotificationSenderOptions {
return func(u *UserNotificationSender) {
- f := func(ctx context.Context, userIDs []string) (result []common_user.CommonUser, err error) {
+ f := func(ctx context.Context, userIDs []string) (result []notification.CommonUser, err error) {
users, err := fn(ctx, userIDs)
if err != nil {
return nil, err
@@ -61,11 +58,9 @@ func WithUserFunc(
}
}
- func NewUserNotificationSender(config *Config, msgClient *rpcli.MsgClient, opts ...userNotificationSenderOptions) *UserNotificationSender {
+ func NewUserNotificationSender(config *Config, msgRpcClient *rpcclient.MessageRpcClient, opts ...userNotificationSenderOptions) *UserNotificationSender {
f := &UserNotificationSender{
- NotificationSender: rpcclient.NewNotificationSender(&config.NotificationConfig, rpcclient.WithRpcClient(func(ctx context.Context, req *msg.SendMsgReq) (*msg.SendMsgResp, error) {
- return msgClient.SendMsg(ctx, req)
- })),
+ NotificationSender: rpcclient.NewNotificationSender(&config.NotificationConfig, rpcclient.WithRpcClient(msgRpcClient)),
}
for _, opt := range opts {
opt(f)
+23 -44
@@ -31,7 +31,6 @@ import (
tablerelation "github.com/openimsdk/open-im-server/v3/pkg/common/storage/model"
"github.com/openimsdk/open-im-server/v3/pkg/common/webhook"
"github.com/openimsdk/open-im-server/v3/pkg/localcache"
"github.com/openimsdk/open-im-server/v3/pkg/rpcli"
"github.com/openimsdk/protocol/group"
friendpb "github.com/openimsdk/protocol/relation"
"github.com/openimsdk/tools/db/redisutil"
@@ -40,6 +39,7 @@ import (
"github.com/openimsdk/open-im-server/v3/pkg/common/convert"
"github.com/openimsdk/open-im-server/v3/pkg/common/servererrs"
"github.com/openimsdk/open-im-server/v3/pkg/common/storage/controller"
"github.com/openimsdk/open-im-server/v3/pkg/rpcclient"
"github.com/openimsdk/protocol/constant"
"github.com/openimsdk/protocol/sdkws"
pbuser "github.com/openimsdk/protocol/user"
@@ -52,16 +52,15 @@ import (
)
type userServer struct {
- pbuser.UnimplementedUserServer
online cache.OnlineCache
db controller.UserDatabase
friendNotificationSender *relation.FriendNotificationSender
userNotificationSender *UserNotificationSender
+ friendRpcClient *rpcclient.FriendRpcClient
+ groupRpcClient *rpcclient.GroupRpcClient
RegisterCenter registry.SvcDiscoveryRegistry
config *Config
webhookClient *webhook.Client
- groupClient *rpcli.GroupClient
- relationClient *rpcli.RelationClient
}
type Config struct {
@@ -94,33 +93,22 @@ func Start(ctx context.Context, config *Config, client registry.SvcDiscoveryRegi
if err != nil {
return err
}
- msgConn, err := client.GetConn(ctx, config.Share.RpcRegisterName.Msg)
- if err != nil {
- return err
- }
- groupConn, err := client.GetConn(ctx, config.Share.RpcRegisterName.Group)
- if err != nil {
- return err
- }
- friendConn, err := client.GetConn(ctx, config.Share.RpcRegisterName.Friend)
- if err != nil {
- return err
- }
- msgClient := rpcli.NewMsgClient(msgConn)
userCache := redis.NewUserCacheRedis(rdb, &config.LocalCacheConfig, userDB, redis.GetRocksCacheOptions())
database := controller.NewUserDatabase(userDB, userCache, mgocli.GetTx())
+ friendRpcClient := rpcclient.NewFriendRpcClient(client, config.Share.RpcRegisterName.Friend)
+ groupRpcClient := rpcclient.NewGroupRpcClient(client, config.Share.RpcRegisterName.Group)
+ msgRpcClient := rpcclient.NewMessageRpcClient(client, config.Share.RpcRegisterName.Msg)
localcache.InitLocalCache(&config.LocalCacheConfig)
u := &userServer{
online: redis.NewUserOnline(rdb),
db: database,
RegisterCenter: client,
- friendNotificationSender: relation.NewFriendNotificationSender(&config.NotificationConfig, msgClient, relation.WithDBFunc(database.FindWithError)),
- userNotificationSender: NewUserNotificationSender(config, msgClient, WithUserFunc(database.FindWithError)),
+ friendRpcClient: &friendRpcClient,
+ groupRpcClient: &groupRpcClient,
+ friendNotificationSender: relation.NewFriendNotificationSender(&config.NotificationConfig, &msgRpcClient, relation.WithDBFunc(database.FindWithError)),
+ userNotificationSender: NewUserNotificationSender(config, &msgRpcClient, WithUserFunc(database.FindWithError)),
config: config,
webhookClient: webhook.NewWebhookClient(config.WebhooksConfig.URL),
- groupClient: rpcli.NewGroupClient(groupConn),
- relationClient: rpcli.NewRelationClient(friendConn),
}
pbuser.RegisterUserServer(server, u)
return u.db.InitOnce(context.Background(), users)
@@ -480,9 +468,7 @@ func (s *userServer) AddNotificationAccount(ctx context.Context, req *pbuser.Add
if err := authverify.CheckAdmin(ctx, s.config.Share.IMAdminUserID); err != nil {
return nil, err
}
- if req.AppMangerLevel < constant.AppNotificationAdmin {
- return nil, errs.ErrArgs.WithDetail("app level not supported")
- }
if req.UserID == "" {
for i := 0; i < 20; i++ {
userId := s.genUserID()
@@ -508,17 +494,16 @@ func (s *userServer) AddNotificationAccount(ctx context.Context, req *pbuser.Add
Nickname: req.NickName,
FaceURL: req.FaceURL,
CreateTime: time.Now(),
- AppMangerLevel: req.AppMangerLevel,
+ AppMangerLevel: constant.AppNotificationAdmin,
}
if err := s.db.Create(ctx, []*tablerelation.User{user}); err != nil {
return nil, err
}
return &pbuser.AddNotificationAccountResp{
- UserID: req.UserID,
- NickName: req.NickName,
- FaceURL: req.FaceURL,
- AppMangerLevel: req.AppMangerLevel,
+ UserID: req.UserID,
+ NickName: req.NickName,
+ FaceURL: req.FaceURL,
}, nil
}
@@ -598,13 +583,8 @@ func (s *userServer) GetNotificationAccount(ctx context.Context, req *pbuser.Get
if err != nil {
return nil, servererrs.ErrUserIDNotFound.Wrap()
}
- if user.AppMangerLevel == constant.AppAdmin || user.AppMangerLevel >= constant.AppNotificationAdmin {
- return &pbuser.GetNotificationAccountResp{Account: &pbuser.NotificationAccountInfo{
- UserID: user.UserID,
- FaceURL: user.FaceURL,
- NickName: user.Nickname,
- AppMangerLevel: user.AppMangerLevel,
- }}, nil
+ if user.AppMangerLevel == constant.AppAdmin || user.AppMangerLevel == constant.AppNotificationAdmin {
+ return &pbuser.GetNotificationAccountResp{}, nil
}
return nil, errs.ErrNoPermission.WrapMsg("notification messages cannot be sent for this ID")
@@ -629,12 +609,11 @@ func (s *userServer) userModelToResp(users []*tablerelation.User, pagination pag
accounts := make([]*pbuser.NotificationAccountInfo, 0)
var total int64
for _, v := range users {
- if v.AppMangerLevel >= constant.AppNotificationAdmin && !datautil.Contain(v.UserID, s.config.Share.IMAdminUserID...) {
+ if v.AppMangerLevel == constant.AppNotificationAdmin && !datautil.Contain(v.UserID, s.config.Share.IMAdminUserID...) {
temp := &pbuser.NotificationAccountInfo{
- UserID: v.UserID,
- FaceURL: v.FaceURL,
- NickName: v.Nickname,
- AppMangerLevel: v.AppMangerLevel,
+ UserID: v.UserID,
+ FaceURL: v.FaceURL,
+ NickName: v.Nickname,
}
accounts = append(accounts, temp)
total += 1
@@ -661,7 +640,7 @@ func (s *userServer) NotificationUserInfoUpdate(ctx context.Context, userID stri
wg.Add(len(es))
go func() {
defer wg.Done()
- _, es[0] = s.groupClient.NotificationUserInfoUpdate(ctx, &group.NotificationUserInfoUpdateReq{
+ _, es[0] = s.groupRpcClient.Client.NotificationUserInfoUpdate(ctx, &group.NotificationUserInfoUpdateReq{
UserID: userID,
OldUserInfo: oldUserInfo,
NewUserInfo: newUserInfo,
@@ -670,7 +649,7 @@ func (s *userServer) NotificationUserInfoUpdate(ctx context.Context, userID stri
go func() {
defer wg.Done()
- _, es[1] = s.relationClient.NotificationUserInfoUpdate(ctx, &friendpb.NotificationUserInfoUpdateReq{
+ _, es[1] = s.friendRpcClient.Client.NotificationUserInfoUpdate(ctx, &friendpb.NotificationUserInfoUpdateReq{
UserID: userID,
OldUserInfo: oldUserInfo,
NewUserInfo: newUserInfo,
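The update fan-out above runs the group and relation notifications concurrently, giving each goroutine its own error slot so no mutex is needed. A generic, self-contained sketch of that shape:

package main

import (
	"errors"
	"fmt"
	"sync"
)

func main() {
	calls := []func() error{
		func() error { return nil },                    // e.g. group service update
		func() error { return errors.New("rpc fail") }, // e.g. relation service update
	}

	es := make([]error, len(calls)) // one slot per call avoids data races
	var wg sync.WaitGroup
	wg.Add(len(calls))
	for i, call := range calls {
		go func(i int, call func() error) {
			defer wg.Done()
			es[i] = call()
		}(i, call)
	}
	wg.Wait()

	fmt.Println(errors.Join(es...)) // surface whichever calls failed
}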
+67 -53
@@ -16,12 +16,14 @@ package tools
import (
"context"
"fmt"
"os"
"time"
"github.com/openimsdk/open-im-server/v3/pkg/common/config"
kdisc "github.com/openimsdk/open-im-server/v3/pkg/common/discoveryregister"
pbconversation "github.com/openimsdk/protocol/conversation"
"github.com/openimsdk/protocol/msg"
"github.com/openimsdk/protocol/third"
"github.com/openimsdk/tools/mcontext"
"github.com/openimsdk/tools/mw"
@@ -44,7 +46,7 @@ func Start(ctx context.Context, config *CronTaskConfig) error {
if config.CronTask.RetainChatRecords < 1 {
return errs.New("msg destruct time must be greater than 1").Wrap()
}
- client, err := kdisc.NewDiscoveryRegister(&config.Discovery, &config.Share, nil)
+ client, err := kdisc.NewDiscoveryRegister(&config.Discovery, &config.Share)
if err != nil {
return errs.WrapMsg(err, "failed to register discovery service")
}
@@ -56,68 +58,80 @@ func Start(ctx context.Context, config *CronTaskConfig) error {
return err
}
- thirdConn, err := client.GetConn(ctx, config.Share.RpcRegisterName.Third)
- if err != nil {
- return err
- }
+ // thirdConn, err := client.GetConn(ctx, config.Share.RpcRegisterName.Third)
+ // if err != nil {
+ // return err
+ // }
conversationConn, err := client.GetConn(ctx, config.Share.RpcRegisterName.Conversation)
if err != nil {
return err
}
- srv := &cronServer{
- ctx: ctx,
- config: config,
- cron: cron.New(),
- msgClient: msg.NewMsgClient(msgConn),
- conversationClient: pbconversation.NewConversationClient(conversationConn),
- thirdClient: third.NewThirdClient(thirdConn),
+ msgClient := msg.NewMsgClient(msgConn)
+ conversationClient := pbconversation.NewConversationClient(conversationConn)
+ // thirdClient := third.NewThirdClient(thirdConn)
+ crontab := cron.New()
+ // scheduled hard delete outdated Msgs in specific time.
+ clearMsgFunc := func() {
+ now := time.Now()
+ deltime := now.Add(-time.Hour * 24 * time.Duration(config.CronTask.RetainChatRecords))
+ ctx := mcontext.SetOperationID(ctx, fmt.Sprintf("cron_%d_%d", os.Getpid(), deltime.UnixMilli()))
+ log.ZDebug(ctx, "clear chat records", "deltime", deltime, "timestamp", deltime.UnixMilli())
+ if _, err := msgClient.ClearMsg(ctx, &msg.ClearMsgReq{Timestamp: deltime.UnixMilli()}); err != nil {
+ log.ZError(ctx, "cron clear chat records failed", err, "deltime", deltime, "cont", time.Since(now))
+ return
+ }
+ log.ZDebug(ctx, "cron clear chat records success", "deltime", deltime, "cont", time.Since(now))
+ }
+ if _, err := crontab.AddFunc(config.CronTask.CronExecuteTime, clearMsgFunc); err != nil {
+ return errs.Wrap(err)
+ }
- if err := srv.registerClearS3(); err != nil {
- return err
+ // scheduled soft delete outdated Msgs in specific time when user set `is_msg_destruct` feature.
+ msgDestructFunc := func() {
+ now := time.Now()
+ ctx := mcontext.SetOperationID(ctx, fmt.Sprintf("cron_%d_%d", os.Getpid(), now.UnixMilli()))
+ log.ZDebug(ctx, "msg destruct cron start", "now", now)
+ conversations, err := conversationClient.GetConversationsNeedDestructMsgs(ctx, &pbconversation.GetConversationsNeedDestructMsgsReq{})
+ if err != nil {
+ log.ZError(ctx, "Get conversation need Destruct msgs failed.", err)
+ return
+ } else {
+ _, err := msgClient.DestructMsgs(ctx, &msg.DestructMsgsReq{Conversations: conversations.Conversations})
+ if err != nil {
+ log.ZError(ctx, "Destruct Msgs failed.", err)
+ return
+ }
+ }
+ log.ZDebug(ctx, "msg destruct cron task completed", "cont", time.Since(now))
+ }
- if err := srv.registerDeleteMsg(); err != nil {
- return err
- }
- if err := srv.registerClearUserMsg(); err != nil {
- return err
+ if _, err := crontab.AddFunc(config.CronTask.CronExecuteTime, msgDestructFunc); err != nil {
+ return errs.Wrap(err)
+ }
+ // // scheduled delete outdated file Objects and their datas in specific time.
+ // deleteObjectFunc := func() {
+ // now := time.Now()
+ // deleteTime := now.Add(-time.Hour * 24 * time.Duration(config.CronTask.FileExpireTime))
+ // ctx := mcontext.SetOperationID(ctx, fmt.Sprintf("cron_%d_%d", os.Getpid(), deleteTime.UnixMilli()))
+ // log.ZDebug(ctx, "deleteoutDatedData ", "deletetime", deleteTime, "timestamp", deleteTime.UnixMilli())
+ // if _, err := thirdClient.DeleteOutdatedData(ctx, &third.DeleteOutdatedDataReq{ExpireTime: deleteTime.UnixMilli()}); err != nil {
+ // log.ZError(ctx, "cron deleteoutDatedData failed", err, "deleteTime", deleteTime, "cont", time.Since(now))
+ // return
+ // }
+ // log.ZDebug(ctx, "cron deleteoutDatedData success", "deltime", deleteTime, "cont", time.Since(now))
+ // }
+ // if _, err := crontab.AddFunc(config.CronTask.CronExecuteTime, deleteObjectFunc); err != nil {
+ // return errs.Wrap(err)
+ // }
log.ZDebug(ctx, "start cron task", "CronExecuteTime", config.CronTask.CronExecuteTime)
- srv.cron.Start()
+ crontab.Start()
<-ctx.Done()
return nil
}
type cronServer struct {
ctx context.Context
config *CronTaskConfig
cron *cron.Cron
msgClient msg.MsgClient
conversationClient pbconversation.ConversationClient
thirdClient third.ThirdClient
}
func (c *cronServer) registerClearS3() error {
if c.config.CronTask.FileExpireTime <= 0 || len(c.config.CronTask.DeleteObjectType) == 0 {
log.ZInfo(c.ctx, "disable scheduled cleanup of s3", "fileExpireTime", c.config.CronTask.FileExpireTime, "deleteObjectType", c.config.CronTask.DeleteObjectType)
return nil
}
_, err := c.cron.AddFunc(c.config.CronTask.CronExecuteTime, c.clearS3)
return errs.WrapMsg(err, "failed to register clear s3 cron task")
}
func (c *cronServer) registerDeleteMsg() error {
if c.config.CronTask.RetainChatRecords <= 0 {
log.ZInfo(c.ctx, "disable scheduled cleanup of chat records", "retainChatRecords", c.config.CronTask.RetainChatRecords)
return nil
}
_, err := c.cron.AddFunc(c.config.CronTask.CronExecuteTime, c.deleteMsg)
return errs.WrapMsg(err, "failed to register delete msg cron task")
}
func (c *cronServer) registerClearUserMsg() error {
_, err := c.cron.AddFunc(c.config.CronTask.CronExecuteTime, c.clearUserMsg)
return errs.WrapMsg(err, "failed to register clear user msg cron task")
}
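An aside, not in this diff: the three register helpers share a guard-then-AddFunc shape. If more tasks accumulate, they could fold into one helper along these lines (registerJob is a hypothetical name, not an existing API):

// registerJob schedules fn under the shared CronExecuteTime spec, or logs
// and skips when the task is disabled by config. Hypothetical sketch; it
// would live alongside the existing cronServer methods in package tools.
func (c *cronServer) registerJob(name string, enabled bool, fn func()) error {
	if !enabled {
		log.ZInfo(c.ctx, "scheduled task disabled", "task", name)
		return nil
	}
	_, err := c.cron.AddFunc(c.config.CronTask.CronExecuteTime, fn)
	return errs.WrapMsg(err, "failed to register "+name+" cron task")
}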
-63
@@ -1,63 +0,0 @@
package tools
import (
"context"
"github.com/openimsdk/open-im-server/v3/pkg/common/config"
kdisc "github.com/openimsdk/open-im-server/v3/pkg/common/discoveryregister"
pbconversation "github.com/openimsdk/protocol/conversation"
"github.com/openimsdk/protocol/msg"
"github.com/openimsdk/protocol/third"
"github.com/openimsdk/tools/mcontext"
"github.com/openimsdk/tools/mw"
"github.com/robfig/cron/v3"
"google.golang.org/grpc"
"google.golang.org/grpc/credentials/insecure"
"testing"
)
func TestName(t *testing.T) {
conf := &config.Discovery{
Enable: config.ETCD,
Etcd: config.Etcd{
RootDirectory: "openim",
Address: []string{"localhost:12379"},
},
}
client, err := kdisc.NewDiscoveryRegister(conf, "source")
if err != nil {
panic(err)
}
client.AddOption(mw.GrpcClient(), grpc.WithTransportCredentials(insecure.NewCredentials()))
ctx := mcontext.SetOpUserID(context.Background(), "imAdmin")
msgConn, err := client.GetConn(ctx, "msg-rpc-service")
if err != nil {
panic(err)
}
thirdConn, err := client.GetConn(ctx, "third-rpc-service")
if err != nil {
panic(err)
}
conversationConn, err := client.GetConn(ctx, "conversation-rpc-service")
if err != nil {
panic(err)
}
srv := &cronServer{
ctx: ctx,
config: &CronTaskConfig{
CronTask: config.CronTask{
RetainChatRecords: 1,
FileExpireTime: 1,
DeleteObjectType: []string{"msg-picture", "msg-file", "msg-voice", "msg-video", "msg-video-snapshot", "sdklog", ""},
},
},
cron: cron.New(),
msgClient: msg.NewMsgClient(msgConn),
conversationClient: pbconversation.NewConversationClient(conversationConn),
thirdClient: third.NewThirdClient(thirdConn),
}
srv.deleteMsg()
//srv.clearS3()
//srv.clearUserMsg()
}
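Note that TestName is an integration probe rather than a unit test: it assumes an etcd instance on localhost:12379 with the msg, third, and conversation rpc services already registered, and it invokes srv.deleteMsg() once directly instead of waiting on a cron schedule (the clearS3 and clearUserMsg calls are left commented out).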
-36
@@ -1,36 +0,0 @@
package tools
import (
"fmt"
"github.com/openimsdk/protocol/msg"
"github.com/openimsdk/tools/log"
"github.com/openimsdk/tools/mcontext"
"os"
"time"
)
func (c *cronServer) deleteMsg() {
now := time.Now()
deltime := now.Add(-time.Hour * 24 * time.Duration(c.config.CronTask.RetainChatRecords))
operationID := fmt.Sprintf("cron_msg_%d_%d", os.Getpid(), deltime.UnixMilli())
ctx := mcontext.SetOperationID(c.ctx, operationID)
log.ZDebug(ctx, "Destruct chat records", "deltime", deltime, "timestamp", deltime.UnixMilli())
const (
deleteCount = 10000
deleteLimit = 50
)
var count int
for i := 1; i <= deleteCount; i++ {
ctx := mcontext.SetOperationID(c.ctx, fmt.Sprintf("%s_%d", operationID, i))
resp, err := c.msgClient.DestructMsgs(ctx, &msg.DestructMsgsReq{Timestamp: deltime.UnixMilli(), Limit: deleteLimit})
if err != nil {
log.ZError(ctx, "cron destruct chat records failed", err)
break
}
count += int(resp.Count)
if resp.Count < deleteLimit {
break
}
}
log.ZDebug(ctx, "cron destruct chat records end", "deltime", deltime, "cont", time.Since(now), "count", count)
}
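deleteMsg caps a single run at deleteCount batches of deleteLimit records and stops early when a batch comes back short; clearS3 and clearUserMsg below follow the same shape. A generic sketch of that loop (drainBatches is illustrative, not part of the codebase):

// drainBatches invokes del until it reports a short batch (backlog drained)
// or maxBatches bounds the run, returning the total records removed.
func drainBatches(maxBatches, limit int, del func() (int, error)) (int, error) {
	var total int
	for i := 0; i < maxBatches; i++ {
		n, err := del()
		if err != nil {
			return total, err
		}
		total += n
		if n < limit {
			break // short batch: nothing left to delete
		}
	}
	return total, nil
}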
-79
@@ -1,79 +0,0 @@
package tools
import (
"fmt"
"github.com/openimsdk/protocol/third"
"github.com/openimsdk/tools/log"
"github.com/openimsdk/tools/mcontext"
"os"
"time"
)
func (c *cronServer) clearS3() {
start := time.Now()
deleteTime := start.Add(-time.Hour * 24 * time.Duration(c.config.CronTask.FileExpireTime))
operationID := fmt.Sprintf("cron_s3_%d_%d", os.Getpid(), deleteTime.UnixMilli())
ctx := mcontext.SetOperationID(c.ctx, operationID)
log.ZDebug(ctx, "deleteoutDatedData", "deletetime", deleteTime, "timestamp", deleteTime.UnixMilli())
const (
deleteCount = 10000
deleteLimit = 100
)
var count int
for i := 1; i <= deleteCount; i++ {
resp, err := c.thirdClient.DeleteOutdatedData(ctx, &third.DeleteOutdatedDataReq{ExpireTime: deleteTime.UnixMilli(), ObjectGroup: c.config.CronTask.DeleteObjectType, Limit: deleteLimit})
if err != nil {
log.ZError(ctx, "cron deleteoutDatedData failed", err)
return
}
count += int(resp.Count)
if resp.Count < deleteLimit {
break
}
}
log.ZDebug(ctx, "cron deleteoutDatedData success", "deltime", deleteTime, "cont", time.Since(start), "count", count)
}
// var req *third.DeleteOutdatedDataReq
// count1, err := ExtractField(ctx, c.thirdClient.DeleteOutdatedData, req, (*third.DeleteOutdatedDataResp).GetCount)
//
// c.thirdClient.DeleteOutdatedData(ctx, &third.DeleteOutdatedDataReq{})
// msggateway.GetUsersOnlineStatusCaller.Invoke(ctx, &msggateway.GetUsersOnlineStatusReq{})
//
// var cli ThirdClient
//
// c111, err := cli.DeleteOutdatedData(ctx, 100)
//
// cli.ThirdClient.DeleteOutdatedData(ctx, &third.DeleteOutdatedDataReq{})
//
// cli.AuthSign(ctx, &third.AuthSignReq{})
//
// cli.SetAppBadge()
//
//}
//
//func extractField[A, B, C any](ctx context.Context, fn func(ctx context.Context, req *A, opts ...grpc.CallOption) (*B, error), req *A, get func(*B) C) (C, error) {
// resp, err := fn(ctx, req)
// if err != nil {
// var c C
// return c, err
// }
// return get(resp), nil
//}
//
//func ignore(_ any, err error) error {
// return err
//}
//
//type ThirdClient struct {
// third.ThirdClient
//}
//
//func (c *ThirdClient) DeleteOutdatedData(ctx context.Context, expireTime int64) (int32, error) {
// return extractField(ctx, c.ThirdClient.DeleteOutdatedData, &third.DeleteOutdatedDataReq{ExpireTime: expireTime}, (*third.DeleteOutdatedDataResp).GetCount)
//}
//
//func (c *ThirdClient) DeleteOutdatedData1(ctx context.Context, expireTime int64) error {
// return ignore(c.ThirdClient.DeleteOutdatedData(ctx, &third.DeleteOutdatedDataReq{ExpireTime: expireTime}))
//}
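The commented scratch above sketches a generic "call the RPC, project one field" helper. Under Go 1.18+ generics it would compile roughly as follows, alongside the package's existing context and google.golang.org/grpc imports (signatures inferred from the comments themselves, so treat this as a sketch):

// extractField invokes fn and projects a single field from its response,
// returning C's zero value alongside any RPC error.
func extractField[A, B, C any](
	ctx context.Context,
	fn func(ctx context.Context, req *A, opts ...grpc.CallOption) (*B, error),
	req *A,
	get func(*B) C,
) (C, error) {
	resp, err := fn(ctx, req)
	if err != nil {
		var zero C
		return zero, err
	}
	return get(resp), nil
}

// Usage, mirroring the commented example above:
//   count, err := extractField(ctx, c.thirdClient.DeleteOutdatedData,
//       &third.DeleteOutdatedDataReq{ExpireTime: expireTime},
//       (*third.DeleteOutdatedDataResp).GetCount)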
-34
@@ -1,34 +0,0 @@
package tools
import (
"fmt"
pbconversation "github.com/openimsdk/protocol/conversation"
"github.com/openimsdk/tools/log"
"github.com/openimsdk/tools/mcontext"
"os"
"time"
)
func (c *cronServer) clearUserMsg() {
now := time.Now()
operationID := fmt.Sprintf("cron_user_msg_%d_%d", os.Getpid(), now.UnixMilli())
ctx := mcontext.SetOperationID(c.ctx, operationID)
log.ZDebug(ctx, "clear user msg cron start")
const (
deleteCount = 10000
deleteLimit = 100
)
var count int
for i := 1; i <= deleteCount; i++ {
resp, err := c.conversationClient.ClearUserConversationMsg(ctx, &pbconversation.ClearUserConversationMsgReq{Timestamp: now.UnixMilli(), Limit: deleteLimit})
if err != nil {
log.ZError(ctx, "ClearUserConversationMsg failed.", err)
return
}
count += int(resp.Count)
if resp.Count < deleteLimit {
break
}
}
log.ZDebug(ctx, "clear user msg cron task completed", "cont", time.Since(now), "count", count)
}
+5
@@ -81,6 +81,11 @@ type TextElem struct {
Content string `json:"content" validate:"required"`
}
type StreamMsgElem struct {
Type string `mapstructure:"type" validate:"required"`
Content string `mapstructure:"content" validate:"required"`
}
type RevokeElem struct {
RevokeMsgClientID string `mapstructure:"revokeMsgClientID" validate:"required"`
}
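The mapstructure tags on StreamMsgElem and RevokeElem mean these elems are decoded from a generic map and then checked against their validate tags. A hedged sketch of that flow using github.com/mitchellh/mapstructure and github.com/go-playground/validator/v10 (the server's actual decode path may differ):

package main

import (
	"fmt"

	"github.com/go-playground/validator/v10"
	"github.com/mitchellh/mapstructure"
)

type StreamMsgElem struct {
	Type    string `mapstructure:"type" validate:"required"`
	Content string `mapstructure:"content" validate:"required"`
}

func main() {
	raw := map[string]any{"type": "text", "content": "hello"}

	var elem StreamMsgElem
	if err := mapstructure.Decode(raw, &elem); err != nil { // map -> struct via mapstructure tags
		panic(err)
	}
	if err := validator.New().Struct(elem); err != nil { // enforces validate:"required"
		panic(err)
	}
	fmt.Printf("%+v\n", elem)
}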
+2 -6
@@ -55,10 +55,6 @@ func (a *AuthRpcCmd) Exec() error {
 func (a *AuthRpcCmd) runE() error {
 	return startrpc.Start(a.ctx, &a.authConfig.Discovery, &a.authConfig.RpcConfig.Prometheus, a.authConfig.RpcConfig.RPC.ListenIP,
-		a.authConfig.RpcConfig.RPC.RegisterIP, a.authConfig.RpcConfig.RPC.AutoSetPorts, a.authConfig.RpcConfig.RPC.Ports,
-		a.Index(), a.authConfig.Share.RpcRegisterName.Auth, &a.authConfig.Share, a.authConfig,
-		[]string{
-			a.authConfig.Share.RpcRegisterName.MessageGateway,
-		},
-		auth.Start)
+		a.authConfig.RpcConfig.RPC.RegisterIP, a.authConfig.RpcConfig.RPC.Ports,
+		a.Index(), a.authConfig.Share.RpcRegisterName.Auth, &a.authConfig.Share, a.authConfig, auth.Start)
 }
+2 -4
@@ -57,8 +57,6 @@ func (a *ConversationRpcCmd) Exec() error {
 func (a *ConversationRpcCmd) runE() error {
 	return startrpc.Start(a.ctx, &a.conversationConfig.Discovery, &a.conversationConfig.RpcConfig.Prometheus, a.conversationConfig.RpcConfig.RPC.ListenIP,
-		a.conversationConfig.RpcConfig.RPC.RegisterIP, a.conversationConfig.RpcConfig.RPC.AutoSetPorts, a.conversationConfig.RpcConfig.RPC.Ports,
-		a.Index(), a.conversationConfig.Share.RpcRegisterName.Conversation, &a.conversationConfig.Share, a.conversationConfig,
-		nil,
-		conversation.Start)
+		a.conversationConfig.RpcConfig.RPC.RegisterIP, a.conversationConfig.RpcConfig.RPC.Ports,
+		a.Index(), a.conversationConfig.Share.RpcRegisterName.Conversation, &a.conversationConfig.Share, a.conversationConfig, conversation.Start)
 }
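Read together with the auth hunk above: both call sites drop the AutoSetPorts flag and the trailing dependent-service slice, leaving startrpc.Start to take the register name, share config, service config, and start function directly.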

Some files were not shown because too many files have changed in this diff.