# rpcOpenResource

**Repository Path**: qsescdm/rpcOpenResource

## Basic Information

- **Project Name**: rpcOpenResource
- **Description**: A lightweight RPC scaffold built on Spring Boot + Netty + Nacos, with Swagger, Undertow, MyBatis, and Redis already integrated. After pulling, adjust the relevant configuration and it runs; see the README for details.
- **Primary Language**: Java
- **License**: MulanPSL-2.0
- **Default Branch**: master
- **Homepage**: None
- **GVP Project**: No

## Statistics

- **Stars**: 4
- **Forks**: 0
- **Created**: 2023-01-14
- **Last Updated**: 2023-03-17

## Categories & Tags

**Categories**: Uncategorized

**Tags**: None

## README

This project was created from a GitLab [Project Template](https://docs.gitlab.com/ce/gitlab-basics/create-project.html). Additions and changes to the project can be proposed [on the original project](https://gitlab.com/gitlab-org/project-templates/spring).

### Introduction

This is a lightweight RPC framework whose core is built on Spring Boot + Netty + Nacos. RPC is well suited to calls between internal services: data is transmitted in binary form to save network resources, and Netty's non-blocking, multiplexed I/O gives higher throughput, making the framework a good fit for high-concurrency, performance-sensitive scenarios. Nacos serves as the RPC registry (a minimal registration/discovery sketch appears after the Undertow section below).

### Nacos vs. ZooKeeper as a registry

Nacos supports two storage modes for service information: persistent and non-persistent.

- Non-persistent mode stores registrations directly in the memory of the Nacos service nodes. The nodes follow a decentralized design and shard the registration data by hash.
- Persistent mode uses the Raft protocol to elect a leader and, likewise relying on a majority (quorum) mechanism, stores the data on the leader node.

ZooKeeper uses its tree structure as the data store: service registration and consumption information lives directly on the tree nodes, and the cluster also relies on a majority mechanism to keep service nodes consistent. In practice ZooKeeper has more bottlenecks than one might expect:

1. ZooKeeper writes do not scale. With a fixed set of registered nodes, write performance becomes the bottleneck; application releases queue up, which shows up as very slow application startup.
2. ZooKeeper does not support routing across data centers. Unlike Eureka, it has no concept of zones; Eureka prefers local routing and can fail over to another data center when the local one has problems.
3. When there are too many ZooKeeper nodes, a change to a service node must be broadcast to every machine at once (a "thundering herd"), which can instantly saturate the network adapter, and notifications are easily duplicated.

### Undertow instead of Tomcat

Undertow replaces Tomcat as the web container; it outperforms Tomcat in high-concurrency scenarios:

1. High performance: it stands out under high concurrency in load-test comparisons against several similar products.
2. Servlet 4.0 support.
3. Full WebSocket support, including JSR-356, to serve web applications with very large numbers of clients.
4. Embeddable: it needs no external container; a web server can be built quickly through its API alone.
5. Flexible: requests are configured and handled through chained handlers, so modules are loaded on demand and redundant features can be left out.
6. Lightweight: it is an embedded web server consisting of two core jar files.
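To make the registry interaction above concrete, here is a minimal sketch of what registration and discovery look like against Nacos using the official `nacos-client` API. This is illustrative only, not code from this project (the framework wires this up internally): the service name `rpc-demo-service`, the IP, and the port are hypothetical placeholders.

```java
import java.util.List;

import com.alibaba.nacos.api.NamingFactory;
import com.alibaba.nacos.api.exception.NacosException;
import com.alibaba.nacos.api.naming.NamingService;
import com.alibaba.nacos.api.naming.pojo.Instance;

public class NacosRegistryDemo {
    public static void main(String[] args) throws NacosException {
        // Connect to the Nacos server (default port 8848, as used in this README).
        NamingService naming = NamingFactory.createNamingService("127.0.0.1:8848");

        // Server side: publish the RPC endpoint (hypothetical name, IP, and port).
        naming.registerInstance("rpc-demo-service", "192.168.1.10", 8000);

        // Client side: discover healthy instances, then open a long-lived Netty
        // connection to one of them (the connection itself is omitted here).
        List<Instance> healthy = naming.selectInstances("rpc-demo-service", true);
        for (Instance instance : healthy) {
            System.out.printf("found provider %s:%d%n", instance.getIp(), instance.getPort());
        }
    }
}
```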
### Quick start

Note the following before running:

- Build a Nacos registry first. Once it is up, configure the registry address in both the server-side and client-side configuration files (`application.yml`). `application.yml` also configures the number of core threads; allocate resources according to your server's actual core count and memory.
- The system integrates both MySQL and Redis. If you do not need them, remove them yourself; if you do, add the related configuration to the configuration files.
- The host running the server must open TCP port 8000 to the host running the client; otherwise the client cannot establish its long connection to the server.
- For local deployment, set up Nacos locally or use a test-environment Nacos, and make sure the test environment's Nacos port 8848 is reachable from your machine (the port permission must be opened).
- Adjust the number of server core threads according to the server's hardware configuration.
- Mind the startup order of the server and the client, described below.

Start the server first. On startup it registers its service with Nacos (service registration); afterwards you can inspect the registration in the Nacos console:

- URL: http://localhost:8848/nacos/#/configurationManagement?dataId=&group=&appName=&namespace=&pageSize=&pageNo=
- username: nacos
- password: nacos

Then start the client. On startup it reads the service information from Nacos (service discovery) and opens a long connection to the server using the discovered IP and port; the startup log contains an entry confirming the connection. Once connected, call the client's test interface. Debugging locally lets you trace the concrete call chain, which helps in understanding how the RPC call works.

While the client is idle, it sends heartbeat packets to the server so the long connection does not drop (see the sketch at the end of this README). Two parameters control this:

- `HEART_BEATS_NUMBER` controls how often heartbeats are logged. It is currently 100, meaning one log line per 100 heartbeats sent.
- `DEFAULT_WRITER_IDLE_TIME_SECONDS` controls how often heartbeats are sent. It is currently 20, meaning the client sends the server a heartbeat every 20 seconds.

### Start commands

- `nohup java -jar -Xmx1024m -Xms512m -XX:+UseAdaptiveSizePolicy -XX:+HeapDumpOnOutOfMemoryError -Dfile.encoding=UTF-8 shuckClientApp.jar >/dev/null 2>&1 --server.port=8088 &`
- `nohup java -jar -Xmx1024m -Xms512m -XX:+UseAdaptiveSizePolicy -XX:+HeapDumpOnOutOfMemoryError -Dfile.encoding=UTF-8 shuckServerApp.jar >/dev/null 2>&1 --server.port=8087 &`

To debug locally, set up local Nacos, MySQL, and Redis first. To simplify, remove MySQL and Redis from the Maven configuration and from `application.yml`.

### openRpc vs. a Feign-style microservice stack

Protocol:

- openRpc: uses Netty over TCP with a single, asynchronous long connection; suited to small payloads, high concurrency, and scenarios where service providers are far fewer than consumers. Multiplexing over the long connection raises system throughput.
- Feign: based on HTTP with short connections, which is unsuitable for high-concurrency access; every request pays for connection setup and teardown, consuming more network resources.

Load balancing:

- openRpc: random, round-robin, least-active, and consistent hashing, with a concept of weights; the load-balancing algorithm can be targeted down to a specific method of a specific service interface.
- Feign: supports only a handful of strategies (round-robin, random, response-time weighted), and the algorithm operates at the client level.

Fault tolerance:

- openRpc: supports multiple strategies (failover, failfast, broadcast, forking, etc.) and introduces configuration parameters such as retry count and timeout.
- Feign: achieves fault tolerance through a circuit-breaker mechanism, a different approach.
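The client heartbeat described in the quick start can be sketched with Netty's `IdleStateHandler`. This is an illustrative outline, not the project's actual handler: the class name, the `configure` helper, and the plain-text `PING` payload are assumptions; only the two constants and their values (send every 20 seconds, log every 100 beats) come from this README.

```java
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicLong;

import io.netty.buffer.Unpooled;
import io.netty.channel.ChannelDuplexHandler;
import io.netty.channel.ChannelHandlerContext;
import io.netty.channel.ChannelPipeline;
import io.netty.handler.timeout.IdleState;
import io.netty.handler.timeout.IdleStateEvent;
import io.netty.handler.timeout.IdleStateHandler;
import io.netty.util.CharsetUtil;

public class HeartbeatClientHandler extends ChannelDuplexHandler {

    // Values taken from the README; the surrounding class is illustrative.
    private static final int DEFAULT_WRITER_IDLE_TIME_SECONDS = 20;
    private static final int HEART_BEATS_NUMBER = 100;

    private final AtomicLong sent = new AtomicLong();

    /** Wire the idle detector and this handler into a client pipeline. */
    public static void configure(ChannelPipeline pipeline) {
        // Fire a WRITER_IDLE event when nothing has been written for 20 seconds.
        pipeline.addLast(new IdleStateHandler(
                0, DEFAULT_WRITER_IDLE_TIME_SECONDS, 0, TimeUnit.SECONDS));
        pipeline.addLast(new HeartbeatClientHandler());
    }

    @Override
    public void userEventTriggered(ChannelHandlerContext ctx, Object evt) throws Exception {
        if (evt instanceof IdleStateEvent
                && ((IdleStateEvent) evt).state() == IdleState.WRITER_IDLE) {
            // The channel has been write-idle: send a ping so the server
            // keeps the long connection open.
            ctx.writeAndFlush(Unpooled.copiedBuffer("PING", CharsetUtil.UTF_8));
            // Log only every HEART_BEATS_NUMBER-th heartbeat to keep logs quiet.
            if (sent.incrementAndGet() % HEART_BEATS_NUMBER == 0) {
                System.out.println("sent " + sent.get() + " heartbeats");
            }
        } else {
            super.userEventTriggered(ctx, evt);
        }
    }
}
```

Separating the idle detection (`IdleStateHandler`) from the reaction to it (this handler) is the conventional Netty design: the same detector can drive read-idle handling on the server side, which closes connections whose clients have stopped sending heartbeats.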