On a hugepage memory fault (SIGBUS) caused by edgecore

2023-09-13 06:41:28

Deploying business components in containers has a clear benefit: resources can be isolated per component and limited through cgroups. In practice, however, some problems remain.

After we set a hugepage memory limit, we found that a business component would hit a SIGBUS error after running for a while, causing it to restart.

Consulting the official kernel documentation for the HugeTLB cgroup controller turned up the following key information:

hugetlb.<hugepagesize>.rsvd.limit_in_bytes
hugetlb.<hugepagesize>.rsvd.max_usage_in_bytes
hugetlb.<hugepagesize>.rsvd.usage_in_bytes
hugetlb.<hugepagesize>.rsvd.failcnt

The HugeTLB controller allows one to limit HugeTLB reservations per control
group and enforces the limit at reservation time as well as at fault time for
HugeTLB memory for which no reservation exists. Since reservation limits are
enforced at reservation time (on mmap or shmget), reservation limits never
cause the application to get a SIGBUS signal if the memory was reserved
beforehand. For MAP_NORESERVE allocations, the reservation limit behaves the
same as the fault limit, enforcing memory usage at fault time and causing the
application to receive a SIGBUS if it crosses its limit.

Reservation limits are superior to the page fault limits described above,
since reservation limits are enforced at reservation time (on mmap or shmget)
and never cause the application to get a SIGBUS signal if the memory was
reserved beforehand. This allows for easier fallback to alternatives such as
non-HugeTLB memory. In the case of page fault accounting, it is very hard to
avoid processes getting SIGBUS, since the sysadmin would need to know
precisely the HugeTLB usage of all tasks in the system and make sure there
are enough pages to satisfy all requests. Avoiding tasks getting SIGBUS on
overcommitted systems is practically impossible with page fault accounting.
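
To make the two enforcement points concrete, here is a minimal C sketch of the failure mode described above. It assumes 2 MB huge pages and that the process runs in a cgroup whose hugetlb.2MB.limit_in_bytes is smaller than the mapping; with the rsvd variant of the limit, the mmap call itself would fail instead of the later write:

#define _GNU_SOURCE
#include <signal.h>
#include <stdio.h>
#include <sys/mman.h>
#include <unistd.h>

#define HPAGE (2UL * 1024 * 1024)   /* assumed 2 MB hugepage size */
#define LEN   (4 * HPAGE)           /* map four huge pages */

static void on_sigbus(int sig) {
    /* Fault-time accounting: this write crossed hugetlb.2MB.limit_in_bytes. */
    static const char msg[] = "SIGBUS while touching hugepage memory\n";
    (void)sig;
    write(STDERR_FILENO, msg, sizeof msg - 1);
    _exit(1);
}

int main(void) {
    signal(SIGBUS, on_sigbus);

    /* The fault limit is not checked here, so mmap succeeds even when the
       cgroup limit is smaller than LEN (only the rsvd limit would stop it). */
    char *p = mmap(NULL, LEN, PROT_READ | PROT_WRITE,
                   MAP_PRIVATE | MAP_ANONYMOUS | MAP_HUGETLB, -1, 0);
    if (p == MAP_FAILED) {
        perror("mmap");
        return 1;
    }
    puts("mmap succeeded; touching pages...");

    /* Each first touch of a huge page faults it in; the fault that crosses
       the cgroup limit delivers SIGBUS instead of returning an error. */
    for (size_t off = 0; off < LEN; off += HPAGE)
        p[off] = 1;

    puts("all pages touched within the limit");
    munmap(p, LEN);
    return 0;
}

This matches the symptom in production: the allocation path reports success, and the process dies later, on first use of the memory.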

I then looked locally at the corresponding hugetlb.2MB.limit_in_bytes, hugetlb.2MB.usage_in_bytes, and hugetlb.2MB.max_usage_in_bytes files under the /sys/fs/cgroup/hugetlb/kubepods.slice/ directory,

and checked the node's hugepage allocation in /proc/meminfo:

HugePages_Total:     100
HugePages_Free:       81
HugePages_Rsvd:        0
HugePages_Surp:        0
Hugepagesize:       2048 kB

This showed that when the number of huge pages is changed via sysctl vm.nr_hugepages, the limit values under the cgroup are not updated accordingly.
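
The drift is easy to confirm programmatically. Below is a small C sketch of that check (the kubepods.slice path and the 2 MB page size are taken from the output above and are assumptions about this particular node):

#include <stdio.h>

int main(void) {
    long total = -1;
    unsigned long long limit;
    char line[256];

    /* Node-level pool size from /proc/meminfo. */
    FILE *m = fopen("/proc/meminfo", "r");
    if (!m) { perror("/proc/meminfo"); return 1; }
    while (fgets(line, sizeof line, m))
        if (sscanf(line, "HugePages_Total: %ld", &total) == 1)
            break;
    fclose(m);
    if (total < 0) { fprintf(stderr, "HugePages_Total not found\n"); return 1; }

    /* Current cgroup limit for the kubepods hierarchy (assumed path). */
    FILE *c = fopen("/sys/fs/cgroup/hugetlb/kubepods.slice/"
                    "hugetlb.2MB.limit_in_bytes", "r");
    if (!c || fscanf(c, "%llu", &limit) != 1) { perror("cgroup limit"); return 1; }
    fclose(c);

    unsigned long long pool = (unsigned long long)total * 2 * 1024 * 1024;
    printf("node pool: %llu bytes, cgroup limit: %llu bytes%s\n",
           pool, limit, pool != limit ? "  <-- stale limit" : "");
    return 0;
}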

As a result, the business component's memory allocation succeeds, but a SIGBUS error is triggered when the allocated memory is actually used. It is easy to mistake this for a memory bug in the business component's own code, when in fact the cause is the stale limit.

Further investigation located the problem in edgecore: when setting the cgroup hugepage limit, it does not track the machine-level hugepage resources, so the limit value is not updated promptly when those resources change, which triggers the error.
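
Conceptually, the remediation is to rewrite the cgroup limit whenever the node's hugepage pool changes. A hypothetical sketch follows (edgecore itself is written in Go; this C fragment only illustrates the missing resync step, with the same assumed path and page size as above):

#include <stdio.h>

int main(void) {
    /* Pool size in bytes; a real resync would recompute this from
       /proc/meminfo as in the checker above (here: HugePages_Total = 100). */
    unsigned long long pool_bytes = 100ULL * 2 * 1024 * 1024;

    FILE *f = fopen("/sys/fs/cgroup/hugetlb/kubepods.slice/"
                    "hugetlb.2MB.limit_in_bytes", "w");
    if (!f) { perror("cgroup limit"); return 1; }
    fprintf(f, "%llu\n", pool_bytes);   /* limit now tracks the resized pool */
    fclose(f);
    return 0;
}

Until edgecore performs this resync itself, rewriting the limit after every sysctl vm.nr_hugepages change avoids the misleading SIGBUS.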
