CrazyAirhead

A crazy fool, a foolish madman: only a fool can persevere, and only the crazy can truly focus!


0. Precautions

  • Make sure the installed operating system is CentOS 7

  • The server may have multiple NICs. First disable the extra NICs with the ifconfig command so that only one NIC is active.

# List the NICs
ifconfig
# Disable a given NIC
ifconfig [NIC_NAME] down
  • Multiple IPs on one NIC. With only one NIC active, check how many IPs the NIC carries; if there is more than one, remove the IPs pinned to the NIC and obtain an IP dynamically instead, with the following commands:

echo $(hostname -I)
ip addr flush dev [NIC_NAME]
ifdown [NIC_NAME]
ifup [NIC_NAME]
  • hostname configuration. Before installing, you need to map the hostname to the IP address, for example:
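A minimal sketch of the mapping, assuming a hypothetical IP 192.168.1.100 and hostname dss-server (replace both with your real values):

# Append the hostname-to-IP mapping and set the hostname
echo "192.168.1.100 dss-server" >> /etc/hosts
hostnamectl set-hostname dss-server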

1. Basic software installation

  • Required command-line tools:

yum install -y telnet tar sed dos2unix unzip zip expect curl
  • Check whether commands such as cp have aliases, to avoid prompts during installation.

alias
vi ~/.bashrc
# Remove the aliases for cp and rm
  • Software that needs to be installed
    • MySQL (5.5+), already installed while setting up Hive

    • JDK (1.8.0_141 or later), already installed while setting up Hadoop

    • Python (both 2.x and 3.x are supported). python2 ships with the system, so no install is needed, but pip and matplotlib must be installed

curl https://bootstrap.pypa.io/pip/2.7/get-pip.py -o get-pip.py
python get-pip.py
pip config set global.index-url https://mirrors.aliyun.com/pypi/simple/
python -m pip install matplotlib
  • Install Nginx

sudo yum install -y epel-release
sudo yum -y update
sudo yum install -y nginx
systemctl start nginx
systemctl enable nginx
  • Install Hadoop 2.7.2

  • Install Hive 2.3.3 (version 2.3.3 could not be found, so 2.3.9 is used instead)

  • Install Spark 2.0 or later

        Deploy Hadoop + Hive + Spark

        Make sure the following commands can be executed:

    hdfs dfs -ls /
    hive -e "show databases"
    spark-sql -e "show databases"

2. Create the deployment user

  1. Assume the deployment user is the hadoop account (it does not have to be hadoop, but deploying with the Hadoop superuser is recommended; this is only an example)

  2. Create the deployment user on every machine to be deployed; the following command creates the deployment user hadoop

sudo useradd hadoop
  3. Because Linkis services switch engines (and thereby execute jobs) via sudo -u ${linux-user}, the deployment user needs passwordless sudo. Modify the deployment user's permissions as follows

Edit the /etc/sudoers file:

vi /etc/sudoers

Add the following line to /etc/sudoers:

hadoop  ALL=(ALL)  NOPASSWD: ALL
  4. Fix file ownership

...
chown root:root /etc/sudo.conf -R
chown root:root /etc/sudoers.d -R
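To confirm that passwordless sudo actually works, a quick check, assuming the deployment user is hadoop as above:

# Should print "root" without prompting for a password
su - hadoop -c 'sudo -n whoami'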

3. Prepare the installation package

  • You can build the package yourself or download it from the release page: DSS Release-1.1.1

  • The one-click DSS & Linkis installation package has the following directory layout:

├── dss_linkis                                # Main directory of the one-click deployment
  ├── bin                                     # One-click install and one-click start of DSS + Linkis
  ├── conf                                    # Parameter configuration directory of the one-click deployment
  ├── wedatasphere-dss-x.x.x-dist.tar.gz      # DSS backend installation package
  ├── wedatasphere-dss-web-x.x.x-dist.zip     # DSS frontend and Linkis frontend installation package
  ├── wedatasphere-linkis-x.x.x-dist.tar.gz   # Linkis backend installation package

4. Modify the configuration

  • You need to modify config.sh and db.sh under the xx/dss_linkis/conf directory.

  • Open config.sh and change the configuration parameters as needed; they are described below:

#################### Basic settings of the one-click deployment ####################
### deploy user (defaults to the current login user)
deployUser=hadoop
### Linkis_VERSION
LINKIS_VERSION=1.1.1
### DSS Web (usually no change needed for a local install, but confirm the port is free; if taken, change it to any available port)
DSS_NGINX_IP=127.0.0.1
DSS_WEB_PORT=8085
### DSS VERSION
DSS_VERSION=1.1.1

############## Other Linkis defaults: start ##############
### Specifies the user workspace, which is used to store the user's script files and log files.
### Generally a local directory. file:// required.
WORKSPACE_USER_ROOT_PATH=file:///tmp/linkis/
### User's root hdfs path. hdfs:// required. Stores Job result sets, logs and similar files.
HDFS_USER_ROOT_PATH=hdfs:///tmp/linkis
### Path to store job ResultSet: file or hdfs path. hdfs:// required.
### If not configured, HDFS_USER_ROOT_PATH is used.
RESULT_SET_ROOT_PATH=hdfs:///tmp/linkis
### Path to store started engines and engine logs, must be local and writable by the deploy user.
ENGINECONN_ROOT_PATH=/appcom/tmp
### Environment of the base components
### HADOOP CONF DIR (adjust to your environment)
HADOOP_CONF_DIR=/appcom/config/hadoop-config
### HIVE CONF DIR (adjust to your environment)
HIVE_CONF_DIR=/appcom/config/hive-config
### SPARK CONF DIR (adjust to your environment)
SPARK_CONF_DIR=/appcom/config/spark-config
### for install (adjust to your environment)
LINKIS_PUBLIC_MODULE=lib/linkis-commons/public-module
## YARN REST URL, required by the Spark engine (adjust IP and port to your environment)
YARN_RESTFUL_URL=http://127.0.0.1:8088
## Engine versions (adjust the version numbers to what is actually installed)
SPARK_VERSION=2.4.3
HIVE_VERSION=2.3.9
PYTHON_VERSION=python2
## LDAP is for enterprise authorization, if you just want to have a try, ignore it.
#LDAP_URL=ldap://localhost:1389/
#LDAP_BASEDN=dc=webank,dc=com
#LDAP_USER_NAME_FORMAT=cn=%s@xxx.com,OU=xxx,DC=xxx,DC=com
############## Other Linkis defaults: end ##############

################### The install Configuration of all Linkis's Micro-Services ###################
#################### Adjust the IPs and ports as needed ####################
# NOTICE:
# 1. If you just wanna try, the following micro-service configuration can be set without any settings.
#    These services will be installed by default on this machine.
# 2. In order to get the most complete enterprise-level features, we strongly recommend that you install
#    the following microservice parameters
#
### EUREKA install information
### You can access it in your browser at: http://${EUREKA_INSTALL_IP}:${EUREKA_PORT}
### Microservices Service Registration Discovery Center
LINKIS_EUREKA_INSTALL_IP=127.0.0.1
LINKIS_EUREKA_PORT=9600
#LINKIS_EUREKA_PREFER_IP=true
### Gateway install information
#LINKIS_GATEWAY_INSTALL_IP=127.0.0.1
LINKIS_GATEWAY_PORT=9001
### ApplicationManager
#LINKIS_MANAGER_INSTALL_IP=127.0.0.1
LINKIS_MANAGER_PORT=9101
### EngineManager
#LINKIS_ENGINECONNMANAGER_INSTALL_IP=127.0.0.1
LINKIS_ENGINECONNMANAGER_PORT=9102
### EnginePluginServer
#LINKIS_ENGINECONN_PLUGIN_SERVER_INSTALL_IP=127.0.0.1
LINKIS_ENGINECONN_PLUGIN_SERVER_PORT=9103
### LinkisEntrance
#LINKIS_ENTRANCE_INSTALL_IP=127.0.0.1
LINKIS_ENTRANCE_PORT=9104
### publicservice
#LINKIS_PUBLICSERVICE_INSTALL_IP=127.0.0.1
LINKIS_PUBLICSERVICE_PORT=9105
### cs
#LINKIS_CS_INSTALL_IP=127.0.0.1
LINKIS_CS_PORT=9108
########## End of the Linkis micro-service configuration ##########

################### The install Configuration of all DataSphereStudio's Micro-Services ###################
#################### Uncommented parameters must be configured; commented ones can be changed as needed ####################
# NOTICE:
# 1. If you just wanna try, the following micro-service configuration can be set without any settings.
#    These services will be installed by default on this machine.
# 2. In order to get the most complete enterprise-level features, we strongly recommend that you install
#    the following microservice parameters
#
## Stores the temporary ZIP packages published to Schedulis
WDS_SCHEDULER_PATH=file:///appcom/tmp/wds/scheduler
### DSS_SERVER
### This service is used to provide dss-server capability.
### project-server
#DSS_FRAMEWORK_PROJECT_SERVER_INSTALL_IP=127.0.0.1
#DSS_FRAMEWORK_PROJECT_SERVER_PORT=9002
### orchestrator-server
#DSS_FRAMEWORK_ORCHESTRATOR_SERVER_INSTALL_IP=127.0.0.1
#DSS_FRAMEWORK_ORCHESTRATOR_SERVER_PORT=9003
### apiservice-server
#DSS_APISERVICE_SERVER_INSTALL_IP=127.0.0.1
#DSS_APISERVICE_SERVER_PORT=9004
### dss-workflow-server
#DSS_WORKFLOW_SERVER_INSTALL_IP=127.0.0.1
#DSS_WORKFLOW_SERVER_PORT=9005
### dss-flow-execution-server
#DSS_FLOW_EXECUTION_SERVER_INSTALL_IP=127.0.0.1
#DSS_FLOW_EXECUTION_SERVER_PORT=9006
### dss-scriptis-server
#DSS_SCRIPTIS_SERVER_INSTALL_IP=127.0.0.1
#DSS_SCRIPTIS_SERVER_PORT=9008
########## End of the DSS micro-service configuration ##########

############## Other defaults ##############
## Default JVM heap size of the Java applications. With less than 8G of RAM on the deploy machine, 128M is recommended;
## with 16G, at least 256M; for a really smooth user experience, at least 32G of RAM is recommended.
export SERVER_HEAP_SIZE="128M"
## sendemail settings; these only affect the send-email node in DSS workflows
EMAIL_HOST=smtp.163.com
EMAIL_PORT=25
EMAIL_USERNAME=xxx@163.com
EMAIL_PASSWORD=xxxxx
EMAIL_PROTOCOL=smtp
### Save the file path exported by the orchestrator service
ORCHESTRATOR_FILE_PATH=/appcom/tmp/dss
### Save DSS flow execution service log path
EXECUTION_LOG_PATH=/appcom/tmp/dss
############## Other defaults end ##############
  • Please note: DSS recommends LDAP for login authentication. If you want to hook up your company's LDAP, fill in the LDAP parameters in config.sh above. How to install LDAP?

  • Modify the database configuration. Make sure the configured databases are reachable from the install machine, otherwise the DDL and DML imports will fail. Open db.sh and change the parameters as needed; they are described below:

# DSS database
MYSQL_HOST=127.0.0.1
MYSQL_PORT=3306
MYSQL_DB=dss
MYSQL_USER=xxx
MYSQL_PASSWORD=xxx
## Database of the Hive metastore, used by Linkis to access Hive metadata
HIVE_HOST=127.0.0.1
HIVE_PORT=3306
HIVE_DB=xxx
HIVE_USER=xxx
HIVE_PASSWORD=xxx
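Before running the installer, it may help to verify that both databases are reachable from the install machine; a sketch using the placeholder values from db.sh above:

# Both commands should list objects without connection errors
mysql -h 127.0.0.1 -P 3306 -uxxx -pxxx -e "SHOW DATABASES;"
mysql -h 127.0.0.1 -P 3306 -uxxx -pxxx -e "USE xxx; SHOW TABLES;"   # Hive metastore database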

5. Installation and usage

  1. Stop all DSS and Linkis services on the machine
  • If DSS and Linkis services have never been installed, skip this step
  2. Change into the bin directory

cd xx/dss_linkis/bin
  3. Run the install script

    sh install.sh

    1. The install script checks the various environment commands it integrates with; if any are missing, install them as prompted. The following are required:

    2. yum; java; mysql; unzip; expect; telnet; tar; sed; dos2unix; nginx

    3. During installation the script asks whether you want to initialize the database and import metadata; both Linkis and DSS ask. On a first install you must answer yes (2).

    4. Watch the log output printed to the console to see whether the installation succeeded; if there are error messages, inspect them for the specific cause.

    5. Unless you want to reinstall the whole application, this command only needs to be run once.

  4. Start the services

  • If you built the Linkis package yourself and want to enable the data source management feature, change the following configuration; with the downloaded package no action is needed

## Change into the Linkis configuration directory
cd xx/dss_linkis/linkis/conf
## Open the configuration file linkis-env.sh
vi linkis-env.sh
## Set the following option to true
export ENABLE_METADATA_MANAGER=true
  • If you built the Linkis package yourself, change the password used later to match the deployment user name before starting the services; with the downloaded package no action is needed

## Change into the Linkis configuration directory
cd xx/dss_linkis/linkis/conf/
## Open the configuration file linkis-mg-gateway.properties
vi linkis-mg-gateway.properties
## Change the password
wds.linkis.admin.password=hadoop
  • Run the service startup script in the xx/dss_linkis/bin directory

sh start-all.sh

  • If startup produces error messages, inspect them for the specific cause. After startup, each microservice performs a communication check, which helps locate abnormal logs and their causes
  5. Install the default AppConns

# Change into the dss directory; normally it sits right under xx/dss_linkis
cd xx/dss_linkis/dss/bin
# Run the script that installs the default AppConns
sh install-default-appconn.sh

  • This command only needs to be run once, unless you want to reinstall the whole application
  6. Verify that the deployment succeeded
  • You can check the status of the Linkis & DSS backend microservices in the Eureka UI. By default DSS has 6 microservices and Linkis has 6 microservices (Eureka uses port 9600 by default; the address is shown once the services are fully up). A quick check follows below
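As a sanity check, Eureka's standard REST endpoint can count the registered applications; this assumes the default Eureka address above and should report 12 once both stacks are fully up:

curl -s http://127.0.0.1:9600/eureka/apps | grep -o "<application>" | wc -l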

  • You can open the frontend in Chrome at http://DSS_NGINX_IP:DSS_WEB_PORT (the startup log prints this address, and it is also configured in xx/dss_linkis/conf/config.sh). The default administrator user name and password are both the deployment user, hadoop (to change the password, edit the wds.linkis.admin.password parameter in xx/dss_linkis/linkis/conf/linkis-mg-gateway.properties)
  7. Stop the services

sh stop-all.sh

  • To stop all services, run sh stop-all.sh; to start them all again, run sh start-all.sh. Both commands are executed in the xx/dss_linkis/bin directory

6. Additional notes

  • To keep the installation package from growing too large, Linkis ships with only the Hive, Python, Shell, and Spark engine plugins by default. To use other engines, see the docs: Linkis engine installation

  • DSS does not install a scheduling system by default; you can choose to install Schedulis or DolphinScheduler. See the table below for details

  • By default DSS installs only the DateChecker, EventSender, and EventReceiver AppConns. See the docs to install other AppConns, such as Visualis, Exchangis, Qualitis, Prophecis, and Streamis. For scheduling, use Schedulis or DolphinScheduler

Component         Required version        Component deployment          AppConn installation
Schedulis         Schedulis 0.7.0         Schedulis deployment          Schedulis AppConn installation
Visualis          Visualis 1.0.0          Visualis deployment           Visualis AppConn installation
Exchangis         Exchangis 1.0.0         Exchangis deployment          Exchangis AppConn installation
Qualitis          Qualitis 0.9.2          Qualitis deployment           Qualitis AppConn installation
Prophecis         Prophecis 0.3.2         Prophecis deployment          Prophecis AppConn installation
Streamis          Streamis 0.2.0          Streamis deployment           Streamis AppConn installation
DolphinScheduler  DolphinScheduler 1.3.x  DolphinScheduler deployment   DolphinScheduler AppConn installation

7. Troubleshooting

  • The deployment reports a resource upload error

Check xx/dss_linkis/linkis/logs/linkis-ps-publicservice.log for the specific error message to determine the exact cause.
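For example, to surface recent errors from that log (adjust the path to your actual install directory):

tail -n 200 xx/dss_linkis/linkis/logs/linkis-ps-publicservice.log | grep -i -A 3 error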

8. References

Background

We needed to deploy a big data governance platform. After reviewing several open source options, WeBank's DataSphereStudio (DSS) best matched our expectations, so we set out to deploy it.

DSS supports Hive 2.3.3 by default, but that version could not be found on the official site, so 2.3.9 is used.

Installation reference: GettingStarted - Apache Hive - Apache Software Foundation

Base software

  • CentOS 7

  • Hadoop 2.7.2

  • MySQL 5.6+

Base configuration

  • Install MySQL; version 5.7.40 was downloaded
yum install -y ./*.rpm

systemctl start mysqld && systemctl enable mysqld


grep 'temporary password' /var/log/mysqld.log


mysql -uroot -ppassword

alter user root@localhost identified by 'Hadoop.2023';
flush privileges;
exit
Configuration

  • Unpack

tar -zxvf apache-hive-2.3.9-bin.tar.gz
mv apache-hive-2.3.9-bin /home/hive
  • Add to PATH
vi ~/.bashrc

export HIVE_HOME=/home/hive
export PATH=$PATH:/home/hadoop/bin:$HIVE_HOME/bin

source ~/.bashrc
  • Create the Hive directories on HDFS
hdfs dfs -mkdir -p /user/hive
hdfs dfs -chmod -R 777 /user/hive
hdfs dfs -mkdir -p /tmp/hive
hdfs dfs -chmod -R 777 /tmp/hive
  • Create Hive's local temporary directory
mkdir -p /home/tmp/hive
chmod -R 777 /home/tmp/hive
  • Configure hive-site.xml
cd /home/hive/conf

cp hive-default.xml.template hive-site.xml
  • Use vim substitution commands

# Replace the temporary directory
:%s#${system:java.io.tmpdir}#/home/tmp/hive#g

# Replace the directory user
:%s#${system:user.name}#root#g
  • Update Hive's metastore database settings
<property>
  <name>javax.jdo.option.ConnectionDriverName</name>
  <value>com.mysql.jdbc.Driver</value>
  <description>Driver class name for a JDBC metastore</description>
</property>
<property>
  <name>javax.jdo.option.ConnectionUserName</name>
  <value>root</value>
  <description>Username to use against metastore database</description>
</property>
<property>
  <name>javax.jdo.option.ConnectionURL</name>
  <value>jdbc:mysql://172.18.23.219:3306/hive?createDatabaseIfNotExist=true</value>
  <description>
    JDBC connect string for a JDBC metastore.
    To use SSL to encrypt/authenticate the connection, provide database-specific SSL flag in the connection URL.
    For example, jdbc:postgresql://myhost/db?ssl=true for postgres database.
  </description>
</property>
<property>
  <name>javax.jdo.option.ConnectionPassword</name>
  <value>Hadoop.2023</value>
  <description>password to use against metastore database</description>
</property>
  • Download the JDBC driver and place it in the /home/hive/lib directory.
yum install -y wget 
wget https://repo1.maven.org/maven2/mysql/mysql-connector-java/5.1.48/mysql-connector-java-5.1.48.jar

Run

  • Initialize the schema
cd /home/hive/bin
schematool -initSchema -dbType mysql


# If the hive database was not created automatically, create it manually and then rerun the command above.
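A sketch of creating the metastore database by hand (using the MySQL root password set earlier) and rerunning the initialization:

mysql -uroot -p'Hadoop.2023' -e "CREATE DATABASE hive;"
/home/hive/bin/schematool -initSchema -dbType mysql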
  • Start Hive
./hive

Background

We needed to deploy a big data governance platform. After reviewing several open source options, WeBank's DataSphereStudio (DSS) best matched our expectations, so we set out to deploy it.

Since DSS supports Spark 2.0 and above by default, this install uses the 3.3.2-hadoop2 build.

Installation reference: Running Spark on YARN - Spark 2.4.3 Documentation (apache.org)

Base software

yum install -y scala-2.12.17.rpm

Base configuration


Configuration

  • Unpack
tar -zxvf spark-3.3.2-bin-hadoop2.tgz
mv spark-3.3.2-bin-hadoop2 /home/spark
  • Configure spark-env.sh
# Options read in any cluster manager using HDFS
# - HADOOP_CONF_DIR, to point Spark towards Hadoop configuration files
HADOOP_CONF_DIR=/home/bigdata/hadoop/etc/hadoop

# Options read in YARN client/cluster mode
# - YARN_CONF_DIR, to point Spark towards YARN configuration files when you use YARN
YARN_CONF_DIR=/home/bigdata/hadoop/etc/hadoop
  • Edit /home/hadoop/etc/hadoop/yarn-site.xml
<property>
  <name>yarn.nodemanager.pmem-check-enabled</name>
  <value>false</value>
</property>
<property>
  <name>yarn.nodemanager.vmem-check-enabled</name>
  <value>false</value>
</property>

Run

  • Start DFS
start-dfs.sh
  • Start YARN
start-yarn.sh
  • Run spark-shell
spark-shell --master yarn --deploy-mode client
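To further verify that jobs actually run on YARN, the bundled SparkPi example can be submitted; the jar path below assumes the Spark distribution unpacked to /home/spark as above:

spark-submit --class org.apache.spark.examples.SparkPi \
  --master yarn --deploy-mode cluster \
  /home/spark/examples/jars/spark-examples_2.12-3.3.2.jar 10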
  • Verify YARN in the web UI
http://hadoop0:8088/cluster

Background

We needed to deploy a big data governance platform. After reviewing several open source options, WeBank's DataSphereStudio (DSS) best matched our expectations, so we set out to deploy it.

Since DSS supports Hadoop 2.7.2 by default, this install also uses version 2.7.2.

Installation reference: Apache Hadoop 2.7.2 – Hadoop: Setting up a Single Node Cluster.


Installation

See the earlier post on the system upgrade to TDengine 3.0 for installing TDengine.

Create the raw data table

Although the program can create the raw data tables dynamically, the stream computation needs the raw table's definition when it is created, so create the table first; the other fields are omitted here.


Background

The machine room at the office loses power from time to time, and it has no UPS, so I decided to use the NAS at home as an off-site backup of our GitLab repositories.

I had previously set up an off-site code backup by deploying Gogs on a physical machine (CentOS 7) and adding mirror repositories. But the Linux VPN client was unstable: even with auto-reconnect configured it could not recover from disconnects, so synchronization kept failing. I therefore used oVirt to install a Windows virtual machine and set up the company VPN with SoftEther VPN Client, configured to start at boot and reconnect automatically.

Since adding mirror repositories one at a time in Gogs had cost quite a bit of effort, this redeployment synchronizes the backups directly with Git commands (no Gogs), an automation script, and a Windows scheduled task.

Configuration

SSH Keys

Since multiple projects need to be synchronized, using SSH keys reduces per-repository configuration and improves security.

  1. Generate an SSH key pair with the command below; accept all defaults by pressing Enter.

    ssh-keygen -o -t rsa -b 4096 -C "l4qiang@gmail.com"
  2. Copy the contents of /c/Users/v/.ssh/id_rsa.pub

  3. Add the SSH key to your user: click the user avatar, Settings -> SSH Keys, paste the copied public key into the Key text box, and click the Add Key button.

  4. Test

    ssh -T git@xxx.com

Git mirror repositories and synchronization

Two git commands do the heavy lifting:

# Create the mirror
git clone --mirror git@xxx.com:l4qiang/aaa.git


# Synchronize
git --git-dir=aaa.git remote update

Script

Create a script named sync.sh:

gitpath_prefix=git@xxx.com:
gitpath_name=(l4qiang/aaa.git l4qiang/bbb.git)
rep_path=/c/Repository

for name in "${gitpath_name[@]}"; do
    cd $rep_path
    user_name=${name%/*}
    echo $user_name
    project_name=${name##*/}
    echo $project_name
    if [ -d "$name" ]; then
        echo "Backing up, syncing... "$name

        cd $user_name
        git --git-dir=$project_name remote update
    else
        echo "Backing up, creating... "$name

        mkdir -p $name
        cd $user_name
        git clone --mirror $gitpath_prefix$name
    fi
done

Scheduled task

Open Task Scheduler, choose Action -> Create Basic Task, and fill in the task details as prompted.

The key part is the Actions section, filled in as follows:

"C:\Program Files\Git\bin\bash.exe" /c/Repository/sync.sh

Original article: HashiCorp’s go-plugin extensive tutorial

Intro

If you don’t know what go-plugin is, don’t worry, here is a small introduction on the subject matter:

Back in the old days when Go didn’t have the plugin package, HashiCorp was desperately looking for a way to use plugins.

In the old days, Lua plus Go wasn’t really a thing yet, and to be honest, nobody wants to write Lua (joking!).

And thus Mitchell had this brilliant idea of using RPC over the local network to serve a local interface as something that could easily be implemented with any other language that supported RPC. This sounds convoluted but has many benefits! For example, your code will never crash because of a plugin and the ability to use any language to implement a plugin. Not just Go.

It has been a battle-hardened solution for years now and is being actively used by Terraform, Vault, Consul, and especially Packer, all using go-plugin in order to provide much-needed flexibility. Writing a plugin is easy. Or so they say.

It can get complicated quickly, for example if you are trying to use GRPC: you can lose sight of what exactly you need to implement, where, and why. The same goes for mixing in other languages, or for using go-plugin in your own project to extend your CLI with pluggable components.

These are all nothing to sneeze at. Suddenly you’ll find yourself with hundreds of lines of code pasted from various examples and yet nothing works. Or worse, it DOES work but you have no idea how. Then you find yourself needing to extend it with a new capability, or you find an elusive bug and can’t trace its origins.

Fear not. I’ll try to demystify things and draw a clear picture about how it works and how the pieces fit together.

Let’s start at the beginning.

Basic plugin

Let’s start by writing a simple Go GRPC plugin. In fact, we can go through the basic example in the go-plugin’s repository which can be quite confusing when first starting out. We’ll go step-by-step, and the switch to GRPC will be much easier!

Basic concepts

Server

In the case of plugins, the Server is the one serving the plugin’s implementation. This means the server will have to provide the implementation to an interface.

Client

The Client calls the server in order to execute the desired behavior. The underlying logic connects to the server running on localhost on a random higher port, calls the wanted function’s implementation, and waits for a response. Once the response is received, it is handed back to the calling Client.

Implementation

The main function

Logger

The plugins defined here use stdout in a special way. If you aren’t writing a Go based plugin, you will have to do that yourself by outputting something like this:

1|1|tcp|127.0.0.1:1234|grpc 

We’ll come back to this later. Suffice it to say the framework will pick this up and will connect to the plugin based on the output. In order to get some output back, we must define a special logger:

// Create an hclog.Logger
logger := hclog.New(&hclog.LoggerOptions{
    Name:   "plugin",
    Output: os.Stdout,
    Level:  hclog.Debug,
})

NewClient

// We're a host! Start by launching the plugin process.
client := plugin.NewClient(&plugin.ClientConfig{
    HandshakeConfig: handshakeConfig,
    Plugins:         pluginMap,
    Cmd:             exec.Command("./plugin/greeter"),
    Logger:          logger,
})
defer client.Kill()

What is happening here? Let’s go through it one by one. HandshakeConfig: handshakeConfig is the handshake configuration of the plugin. It has a nice comment as well:

// handshakeConfigs are used to just do a basic handshake between
// a plugin and host. If the handshake fails, a user friendly error is shown.
// This prevents users from executing bad plugins or executing a plugin
// directory. It is a UX feature, not a security feature.
var handshakeConfig = plugin.HandshakeConfig{
    ProtocolVersion:  1,
    MagicCookieKey:   "BASIC_PLUGIN",
    MagicCookieValue: "hello",
}

The ProtocolVersion is used to maintain compatibility with your current plugin versions; it’s basically an API version. If you increase it, you have two options: refuse lower protocol versions, or switch on the version number and use a different client implementation for lower versions than for higher ones. That way you maintain backwards compatibility.

The MagicCookieKey and MagicCookieValue are used for the basic handshake the comment talks about. You set these ONCE for your application and never change them again; if you do, your plugins will no longer work. For uniqueness' sake, I suggest using a UUID.

Cmd is one of the most important parts of a plugin. Plugins boil down to a compiled binary which is executed and starts an RPC server; this is where you define the binary that does all this. Since this all happens locally (please keep in mind that go-plugin only supports localhost, and for good reason), these binaries will most likely sit next to your application's binary or in a pre-configured global location, something like ~/.config/my-app/plugins. This is individual for each plugin, of course. Plugins can also be autoloaded via a discovery function given a path and a glob.

And last but not least is the Plugins map. This map is used to identify a plugin in a call to Dispense. It is globally available and must stay consistent for all the plugins to work:

// pluginMap is the map of plugins we can dispense.
var pluginMap = map[string]plugin.Plugin{
    "greeter": &example.GreeterPlugin{},
}

You can see that the key is the name of the plugin and the value is the plugin. We then proceed to create an RPC client:

// Connect via RPC
rpcClient, err := client.Client()
if err != nil {
    log.Fatal(err)
}

Nothing fancy about this one… Now comes the interesting part:

// Request the plugin
raw, err := rpcClient.Dispense("greeter")
if err != nil {
    log.Fatal(err)
}

What’s happening here? Dispense looks in the map created above and searches for the plugin. If it cannot find it, it throws an error at us. If it does find it, it casts the plugin to an RPC or a GRPC type plugin, then proceeds to create an RPC or a GRPC client out of it. There is no call yet; this just creates a client and parses it into its respective representation. Now comes the magic:

// We should have a Greeter now! This feels like a normal interface
// implementation but is in fact over an RPC connection.
greeter := raw.(example.Greeter)
fmt.Println(greeter.Greet())

Here we are type-asserting our raw client into our own plugin type, so we can call the respective function on the plugin. Once that’s done, we have a {client, struct, implementation} that can be called like a simple function. The implementation right now comes from greeter_impl.go, but that will change once protoc makes an appearance.

Behind the scenes, go-plugin does a bunch of hat tricks with multiplexed TCP connections, as well as a remote procedure call to our plugin. Our plugin runs the function, generates some kind of output, and sends it back to the waiting client. The client then parses the message into the given response type and returns it to the caller. This concludes main.go for now.

The Interface

Now let’s investigate the Interface. The interface is used to provide the calling details; it defines our plugin’s capabilities. What does ours look like?

// Greeter is the interface that we're exposing as a plugin.
type Greeter interface {
    Greet() string
}

This is pretty simple. It defines a function which will return a string typed value. Now, we will need a couple of things for this to work. Firstly we need something which defines the RPC workings. go-plugin is working with net/http inside. It also uses something called Yamux for connection multiplexing, but we needn’t worry about this detail. Implementing the RPC details looks like this:

// Here is an implementation that talks over RPC
type GreeterRPC struct {
    client *rpc.Client
}

func (g *GreeterRPC) Greet() string {
    var resp string
    err := g.client.Call("Plugin.Greet", new(interface{}), &resp)
    if err != nil {
        // You usually want your interfaces to return errors. If they don't,
        // there isn't much other choice here.
        panic(err)
    }

    return resp
}

Here the GreeterRPC struct is an RPC-specific implementation that handles communication over RPC. This is the Client in this setup. In the case of gRPC, it would look something like this:

// GreeterGRPC is an implementation of Greeter that talks over gRPC.
type GreeterGRPC struct{ client proto.GreeterClient }

func (m *GreeterGRPC) Greet() (string, error) {
    resp, err := m.client.Greet(context.Background(), &proto.Empty{})
    if err != nil {
        return "", err
    }
    return resp.GetMessage(), nil
}

What is happening here? What’s proto and what is GreeterClient? gRPC uses Google’s protoc tooling to serialize and deserialize data. proto.GreeterClient is Go code generated by protoc; it is a skeleton whose implementation detail is supplied at run time (well, the actual result is used rather than replaced as such). Back to our previous example: the RPC client calls a specific plugin function called Greet, whose implementation is provided by a Server and streamed back over the RPC protocol. The server is pretty easy to follow:

// Here is the RPC server that GreeterRPC talks to, conforming to
// the requirements of net/rpc
type GreeterRPCServer struct {
    // This is the real implementation
    Impl Greeter
}

Impl is the concrete implementation that will be called in the Server’s implementation of the Greet plugin. Now we must define Greet on the RPCServer so that it can invoke the real code. It looks like this:

func (s *GreeterRPCServer) Greet(args interface{}, resp *string) error {
    *resp = s.Impl.Greet()
    return nil
}

This is all still boilerplate for the RPC machinery. Now comes the plugin. For this, the comment is actually quite good too:

// This is the implementation of plugin.Plugin so we can serve/consume this
//
// This has two methods: Server must return an RPC server for this plugin
// type. We construct a GreeterRPCServer for this.
//
// Client must return an implementation of our interface that communicates
// over an RPC client. We return GreeterRPC for this.
//
// Ignore MuxBroker. That is used to create more multiplexed streams on our
// plugin connection and is a more advanced use case.
type GreeterPlugin struct {
    // Impl Injection
    Impl Greeter
}

func (p *GreeterPlugin) Server(*plugin.MuxBroker) (interface{}, error) {
    return &GreeterRPCServer{Impl: p.Impl}, nil
}

func (GreeterPlugin) Client(b *plugin.MuxBroker, c *rpc.Client) (interface{}, error) {
    return &GreeterRPC{client: c}, nil
}

What does this mean? Remember: GreeterRPCServer is the one calling the actual implementation, while Client receives the result of that call. The GreeterPlugin has the Greeter interface embedded, just like the RPCServer. We will use GreeterPlugin as the struct in the plugin map; this is the plugin we actually use. This is all still common code, and these things need to be visible to both sides: the plugin’s implementation uses the interface to see what it needs to implement, and the Client uses it to see what it can call and what API is available, such as Greet. What does the implementation look like?

The Implementation

In a completely separate package, which still has access to the interface definition, the plugin could be something like this:

// Here is a real implementation of Greeter
type GreeterHello struct {
    logger hclog.Logger
}

func (g *GreeterHello) Greet() string {
    g.logger.Debug("message from GreeterHello.Greet")
    return "Hello!"
}

We create a struct and add to it the function defined by the plugin’s interface. Since this interface is required by both parties, it could well sit in a common package outside both programs, something like an SDK. Both codebases could import it and use it as a common dependency. This way we have separated the interface from the plugin and from the calling client. The main function could look something like this:

logger := hclog.New(&hclog.LoggerOptions{
    Level:      hclog.Trace,
    Output:     os.Stderr,
    JSONFormat: true,
})

greeter := &GreeterHello{
    logger: logger,
}

// pluginMap is the map of plugins we can dispense.
var pluginMap = map[string]plugin.Plugin{
    "greeter": &example.GreeterPlugin{Impl: greeter},
}

logger.Debug("message from plugin", "foo", "bar")

plugin.Serve(&plugin.ServeConfig{
    HandshakeConfig: handshakeConfig,
    Plugins:         pluginMap,
})

Notice two things that we need. One is the handshakeConfig: you can either define it here, with the same cookie details as in the client code, or extract the handshake information into the SDK. This is up to you. The next interesting thing is the plugin.Serve method. This is where the magic happens. The plugin opens an RPC communication socket and, over a hijacked stdout, broadcasts its availability to the calling Client in this format:

CORE-PROTOCOL-VERSION | APP-PROTOCOL-VERSION | NETWORK-TYPE | NETWORK-ADDR | PROTOCOL

For Go plugins, you don’t have to concern yourself with this; go-plugin takes care of it all for you. For non-Go versions, we must take it into account: before calling serve, we need to output this information to stdout. A Python plugin, for example, must deal with this itself, like so:

# Output information
print("1|1|tcp|127.0.0.1:1234|grpc")
sys.stdout.flush()

For GRPC plugins, it’s also mandatory to implement a HealthChecker. What would all this look like with GRPC? It gets slightly more complicated, but not too much. We need to use protoc to create a protocol description for our implementation, and then we call that. Let’s look at this now by converting the basic greeter example to GRPC.

GRPC Basic plugin

The GRPC example in the repository is quite elaborate, and perhaps you don’t need the Python part. I will focus on turning the basic RPC example into a GRPC example. That should not be a problem.

The API

First and foremost, you will need to define an API to implement with protoc. For our basic example, the protoc file could look like this:

syntax = "proto3";
package proto;

message GreetResponse {
  string message = 1;
}

message Empty {}

service GreeterService {
  rpc Greet(Empty) returns (GreetResponse);
}

The syntax is quite simple and readable. This defines a response message containing a message field of type string, and a service with a method called Greet. The service definition is basically an interface for which we will provide the concrete implementation through the plugin. To read more about protoc, visit this page: Google Protocol Buffer.

Generate the code

Now, with the protoc definition in hand, we need to generate the stubs that the local client implementation can call. That client call will then, through the remote procedure call, invoke the right function on the server, which has the concrete implementation at the ready, runs it, and returns the result in the specified format. Because the stub needs to be available to both parties (the client AND the server), it needs to live in a shared location. Why? Because the client calls the stub and the server implements it; both need it in order to know what to call and what to implement. To generate the code, run this command:

protoc -I proto/ proto/greeter.proto --go_out=plugins=grpc:proto 

I encourage you to read the generated code. Much of it will make little sense at first. It has a bunch of structs and definitions that the GRPC package uses in order to serve the function. The interesting bits and pieces are:

func (m *GreetResponse) GetMessage() string {
    if m != nil {
        return m.Message
    }
    return ""
}

This gets us the message inside the response.

type GreeterServiceClient interface {
    Greet(ctx context.Context, in *Empty, opts ...grpc.CallOption) (*GreetResponse, error)
}

This is our ServiceClient interface which defines the Greet function’s topology. And lastly, this guy:

func RegisterGreeterServiceServer(s *grpc.Server, srv GreeterServiceServer) {
    s.RegisterService(&_GreeterService_serviceDesc, srv)
}

Which we will need in order to register our implementation for the server. We can ignore the rest.

The interface

Much like the RPC version, we need to define an interface for the client and server to use. This must live in a shared place, as both the server and the client need to know about it. You could put this into an SDK, and your peers could just get the SDK, implement a few functions, and be done. The interface definition in GRPC land could look something like this:

// Greeter is the interface that we're exposing as a plugin.
type Greeter interface {
    Greet() string
}

// This is the implementation of plugin.GRPCPlugin so we can serve/consume this.
type GreeterGRPCPlugin struct {
    // GRPCPlugin must still implement the Plugin interface
    plugin.Plugin
    // Concrete implementation, written in Go. This is only used for plugins
    // that are written in Go.
    Impl Greeter
}

func (p *GreeterGRPCPlugin) GRPCServer(broker *plugin.GRPCBroker, s *grpc.Server) error {
    proto.RegisterGreeterServiceServer(s, &GRPCServer{Impl: p.Impl})
    return nil
}

func (p *GreeterGRPCPlugin) GRPCClient(ctx context.Context, broker *plugin.GRPCBroker, c *grpc.ClientConn) (interface{}, error) {
    return &GRPCClient{client: proto.NewGreeterServiceClient(c)}, nil
}

With this, we have implemented what HashiCorp’s plugin interface requires. The plugin will call the underlying implementation and serve/consume the plugin. We can now write the GRPC part of it. Please note that proto is a shared library too, where the protocol stubs reside. It needs to be somewhere on the path or in a separate SDK of some sort, but it must be visible.

Writing the GRPC Client

First we define the gRPC client struct:

// GRPCClient is an implementation of Greeter that talks over gRPC.
type GRPCClient struct{ client proto.GreeterServiceClient }

Then we define how the client will call the remote function:

func (m *GRPCClient) Greet() string {
    ret, _ := m.client.Greet(context.Background(), &proto.Empty{})
    return ret.Message
}

This takes the client in GRPCClient and calls the method on it. Once that’s done, we return the result’s Message property, which will be "Hello!". proto.Empty is an empty struct; we use it when a defined method has no parameters or no return value. We can’t just leave it blank: protoc needs to be told explicitly that there is no parameter or return value.

Writing the GRPC Server

The server implementation will also be similar. We call Impl here which will have our concrete plugin implementation.

// Here is the gRPC server that GRPCClient talks to.
type GRPCServer struct {
    // This is the real implementation
    Impl Greeter
}

func (m *GRPCServer) Greet(
    ctx context.Context,
    req *proto.Empty) (*proto.GreetResponse, error) {
    v := m.Impl.Greet()
    return &proto.GreetResponse{Message: v}, nil
}

And we use the message response defined by protoc. v holds the response from Greet, which will be "Hello!", provided by the concrete plugin’s implementation. We then transform that into a protoc type by setting the Message property on the GreetResponse struct provided by the automatically generated protoc stub code. Easy, right?

Writing the plugin itself

The whole thing looks much like the RPC implementation, with just a few small modifications. This can sit completely outside of everything, and can even be provided by a third-party implementer.

// Here is a real implementation of Greeter.
type Greeter struct{}

func (Greeter) Greet() string {
    return "Hello!"
}

func main() {
    plugin.Serve(&plugin.ServeConfig{
        HandshakeConfig: shared.Handshake,
        Plugins: map[string]plugin.Plugin{
            "greeter": &shared.GreeterGRPCPlugin{Impl: &Greeter{}},
        },

        // A non-nil value here enables gRPC serving for this plugin...
        GRPCServer: plugin.DefaultGRPCServer,
    })
}

Calling it all in the main

Once all that is done, the main function looks the same as RPC’s main but with some small modifications.

// We're a host. Start by launching the plugin process.
client := plugin.NewClient(&plugin.ClientConfig{
    HandshakeConfig:  shared.Handshake,
    Plugins:          shared.PluginMap,
    Cmd:              exec.Command("./plugin/greeter"),
    AllowedProtocols: []plugin.Protocol{plugin.ProtocolGRPC},
})

The NewClient now defines AllowedProtocols to be ProtocolGRPC. The rest is the same as before: call Dispense, type-assert the plugin to the correct type, then call Greet().

Conclusion

This is it. We made it! Now our plugin works over GRPC with an API defined by protoc. The plugin’s implementation can live wherever we want it to, but it needs some shared data. These are:

  • The generated code by protoc
  • The defined plugin interface
  • The GRPC Server and Client

These need to be visible to both the Client and the Server; the Server here is the plugin. If you plan on letting people extend your application with go-plugin, you should make these available as a separate SDK, so people won’t have to include your whole project just to implement an interface and use protoc. In fact, you could also extract the protoc definition into a separate repository so that your SDK can pull it in as well. You will have three repositories:

  • Your application;
  • The SDK providing the interface and the GRPC Server and Client implementation;
  • The protoc definition file and generated skeleton ( for Go based plugins).

Other languages will have to generate their own protoc code and include it in the plugin, like the Python implementation example located here: Go-plugin Python Example. Also, read this documentation carefully: non-go go-plugin. That document also clarifies what 1|1|tcp|127.0.0.1:1234|grpc means and dispels the confusion around how plugins work. Lastly, if you would like an in-depth explanation of how go-plugin came to be, watch this video by Mitchell: go-plugin explanation video. I must warn you though: it’s an hour long. But worth the watch!
That’s it. I hope this has helped to clear the confusion around how to use go-plugin.

Happy plugging!

Gergely.

Background

One of our company's products is an intranet security log collection and analysis product for small and medium-sized enterprises. It is built on Spring Boot & Spring Cloud, stores data in Elasticsearch, runs on an all-in-one appliance (4-core CPU, 32 GB RAM, 1 TB disk), and is deployed with CentOS 7.8 + Docker.

The product's main business flow: probes (deployed on multiple industrial PCs) trigger scans by the security engine, which produce logs; the probes collect these logs and push them to Kafka; the platform side consumes from Kafka, parses and analyzes the logs, and stores them in Elasticsearch; the analysis and query services then serve the final data for display.

The system ran fairly smoothly until the product added an event-grouping requirement to the logs: whenever a log event arrives, rules must decide whether it belongs to the same event group as the previous log. Because grouping had to be determined in real time, a large amount of aggregation was added on the ingestion path into ES. ES then began consuming large amounts of memory, overall responses slowed, log ingestion lagged, and the analysis and query services could no longer return data normally.


The April investing book club's pick is "Principles for Dealing with the Changing World Order" (Principles 2): knowing how to deal with what I do not know.

Week 1

In the author's view, what are the three most important cycles?

The long-term debt and capital markets cycle, the cycle of internal order and disorder, and the cycle of external order and disorder.

How many phases does a typical big cycle have? Which phase is your country in?

The rise, the top, and the decline. We are currently in the rise phase.

What are the three big cycles, and the 3, 5, 8, and 18 determinants of the rise and fall of nations and their currencies?

3 determinants: favorable and unfavorable financial cycles, the cycle of internal order and disorder, and the cycle of external order and disorder.

5 determinants: favorable and unfavorable financial cycles, the cycle of internal order and disorder, the cycle of external order and disorder, innovation and technology, and acts of nature.

8 determinants: education, competitiveness, innovation and technology, economic output, share of world trade, military strength, financial-center strength, and reserve currency status.

18 determinants: the 3 big cycles (the big economic cycle, the cycle of internal order and disorder, the cycle of external order and disorder), 8 measures of power (education, innovation and technology, cost competitiveness, military strength, trade, economic output, markets and financial centers, reserve currency status), and 7 additional factors (geology, resource-allocation efficiency, acts of nature, infrastructure and investment, character/civility/determination, governance/rule of law, and gaps in wealth, opportunity, and values).

How many phases does the long-term debt cycle have, and what characterizes each? What roles do central banks and governments play in it? Consider why you should not rely on the government to protect your money.

Six phases:

1. At first there is a) little or no debt, and b) people use hard money.

2. Then come claims on hard money (notes and bills).

3. Then debt increases.

4. Then come debt crises, defaults, and devaluations, leading to money printing and the break from hard money.

5. Then comes fiat money, which eventually leads to devaluation.

6. Then a return to hard money.

Most governments abuse their privileged position as the creators and users of money and credit, and no single policymaker controls the whole long-term debt cycle.

What are the three types of monetary systems, and how do they turn into one another?

The three types of monetary systems are hard money, paper money, and fiat money.

When credibility is at its maximum and credit creation at its minimum, systems move from Type 1 (hard money) to Type 2 (paper money).

When credit creation expands and credibility declines, systems move from Type 2 (paper money) to Type 3 (fiat money).

When credit creation is at its maximum and credibility at its minimum, systems move from Type 3 (fiat money) back to Type 1 (hard money).

Week 2

All currencies have devalued at some point. What do devaluations have in common?

  • Every economy has seen the classic run: the notes issued by the central bank exceed the hard money available for redemption.
  • The central bank's net reserves start shrinking before the actual devaluation, in some cases years before it.
  • Runs on a currency and devaluations usually come along with serious debt problems.
  • Typically, the central bank's first response is to raise short-term interest rates, but that is too painful economically, so it soon gives up and increases the money supply instead.
  • Outcomes differ greatly across countries; one important variable is how much economic and military strength the country retains at the time of the devaluation.

The author finds that in most countries, although people fight over ideology and religion, what is the most important factor affecting most people?

The most important factor affecting most people is how people create, acquire, and distribute wealth and power.

How many stages does the big cycle of internal order and disorder have? What signals suggest the United States is in stage five (bad finances and intense conflict)?

Stage 1: the new order begins and the new leadership consolidates power.

Stage 2: the resource-allocation systems and government bureaucracies are built and refined.

Stage 3: peace and prosperity.

Stage 4: excesses in spending and debt, and widening wealth and political gaps.

Stage 5: bad financial conditions and intense conflict.

Stage 6: civil war/revolution.

In today's United States, the federal government and many state and city governments face huge deficits, huge debts, and huge wealth gaps, and the Federal Reserve is printing money on a large scale and buying large amounts of federal government bonds to fund federal spending, which currently far exceeds revenue.

What are the timeless and universal forces behind changes in the external order?

Domestic strength and military strength are closely linked.

Out-financing a rival is one of the greatest advantages a country can have.

The risk of military war is greatest when: 1) both sides are roughly matched militarily, and 2) they have irreconcilable differences over existential issues.

To get more win-win outcomes, both sides must negotiate well, weighing the other side's priorities as well as their own, and knowing how to trade sensibly between the two.

Winning means getting the things that matter most without losing the things that matter most; a war that costs more in lives and money than it brings in benefits is therefore foolish.

Acquire power, respect power, and use power wisely.

Looking at the overall pattern of the big cycle from an investor's perspective, what are the three big risks most investors face?

The three big risks most investors face are: that their portfolio will not deliver the returns needed to cover their spending, that their portfolio will face ruin, and that a large share of their wealth will be taken away.

Week 3

What were the two most important inventions during the rise of the Dutch Empire? Why did it decline?

The Dutch had two especially important inventions: 1) remarkably capable sailing ships that could travel the world, which, combined with the military skills gained from Europe's wars, let them gather enormous wealth; and 2) the capitalism that fueled these ventures.

Excessive debt, many internal fights over wealth, declining military strength, the challenge from Britain, the devaluation of the guilder, and the loss of reserve currency status.

How did the British Empire replace the Dutch Empire as the rising power? How did it decline?

Respect for the rule of law and strong education laid the foundation for Britain's competitive edge in commerce and innovation. A well-educated population, a culture that prized invention, and capital markets that funded new ideas produced a great wave of competitiveness and prosperity. Britain's military strength, especially its navy, helped it build colonies, take over other European countries' colonies, and secure control of global trade routes. London became the world's financial center and the pound the world's reserve currency.

Declining competitiveness, growing inequality and conflict, and the rise of new rivals, especially Germany and the United States. After World War II, Britain carried heavy debts, the empire cost more to maintain than it earned, the pound devalued, and it lost its status as the international reserve currency.

How did the American empire rise to its peak and replace the British Empire as the new leading power?

The United States established a new domestic order through revolution, and the Second Industrial Revolution created huge gains in income, technology, and wealth. The US was a big winner of both world wars.

Where is the United States in its own big cycle? What are the typical signs and signals?

The author's analysis places the United States in stage five of its own big cycle (roughly 70% of the way through, with a margin of error of 10 percentage points).

Three important signs: 1) the rules are disregarded, 2) both sides attack each other emotionally, and 3) blood is shed.

Why is the decline of empires inevitable (whether Dutch, British, or American)?

The forces that made the empire strong weaken, and rival powers emerge.

Week 4

From 1949 to the present, into how many phases can China's rise be divided?

Three phases: 1949-1976, 1978-2008, and 2008 to the present.

Why did the US-China relationship move from symbiosis to conflict?

America's debt-fueled prosperity produced a debt bubble and a wealth gap, while China's strength kept growing; the two countries went from a mutually beneficial relationship to fierce competition, and the conflict intensified.

What is the current state of the United States and China? How many types of war are there?

Following the pattern of the big cycle, the United States is declining (stage five of the cycle) while China is rising (stage three).

There are seven types of war: 1) trade/economic wars; 2) technology wars; 3) geopolitical wars; 4) capital wars; 5) military wars; 6) culture wars; 7) the war with ourselves.

The author's views on the future rest on his thinking about evolution, cycles, and indicators. What are these ideas?

Human inventiveness may bring greater progress, but the debt/economic cycle, the cycle of internal order and disorder, the cycle of external order and disorder, and worsening natural disasters will almost certainly pose problems. In other words, there will be a struggle between human inventiveness and these other challenges.

Conditions differ greatly within and between countries, and those differences will determine which countries rise and which decline, and in what ways.

Based on his analysis of the past and projections of the future, how does Dalio suggest dealing with the known and the unknown? What are the principles?

Understand all the possibilities, consider the worst case, and find ways to eliminate the unbearable ones.

Diversify.

Put delayed gratification ahead of immediate gratification, so you will be better off in the future.

Triangulate repeatedly with the smartest people you can find.