Large objects in PostgreSQL

// There are mainly two system catalogs involved. pg_largeobject_metadata mostly just records ownership and access permissions; the one that actually holds the data is pg_largeobject.



CATALOG(pg_largeobject_metadata,2995)
{
	Oid			lomowner;		/* OID of the largeobject owner */
#ifdef CATALOG_VARLEN			/* variable-length fields start here */
	aclitem		lomacl[1];		/* access permissions */
#endif
} FormData_pg_largeobject_metadata;

/*
 * Each "page" (tuple) of a large object can hold this much data
 *
 * We could set this as high as BLCKSZ less some overhead, but it seems
 * better to make it a smaller value, so that not as much space is used
 * up when a page-tuple is updated.  Note that the value is deliberately
 * chosen large enough to trigger the tuple toaster, so that we will
 * attempt to compress page tuples in-line.  (But they won't be moved off
 * unless the user creates a toast-table for pg_largeobject...)
 *
 * Also, it seems to be a smart move to make the page size be a power of 2,
 * since clients will often be written to send data in power-of-2 blocks.
 * This avoids unnecessary tuple updates caused by partial-page writes.
 */
#define LOBLKSIZE		(BLCKSZ / 4)

CATALOG(pg_largeobject,2613) BKI_WITHOUT_OIDS
{
	Oid			loid;			/* Identifier of large object */
	int4		pageno;			/* Page number (starting from 0) */
	/* data has variable length, but we allow direct access; see inv_api.c */
	bytea		data;			/* Data for page (may be zero-length) */
} FormData_pg_largeobject;



Every large object is split into LOBLKSIZE-sized chunks that are stored as rows in pg_largeobject. The rows are keyed by loid to tell different large objects apart, and the chunks belonging to one large object are kept in order by pageno (a name like "tuple sequence number" would describe it better).

pageno is computed from the byte offset being written, as offset / LOBLKSIZE.
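
A minimal sketch of that mapping, assuming the LOBLKSIZE definition above and the default BLCKSZ of 8192; the helper lo_offset_to_page is purely illustrative and not part of the PostgreSQL source, which does the equivalent arithmetic inside inv_api.c:

#include <stdint.h>
#include <stdio.h>

#define BLCKSZ     8192              /* default PostgreSQL block size */
#define LOBLKSIZE  (BLCKSZ / 4)      /* 2048 bytes of data per pg_largeobject tuple */

/* Hypothetical helper: which pg_largeobject tuple does a byte offset fall into? */
static void
lo_offset_to_page(int64_t offset, int32_t *pageno, int32_t *off_in_page)
{
	*pageno      = (int32_t) (offset / LOBLKSIZE);	/* tuple sequence number */
	*off_in_page = (int32_t) (offset % LOBLKSIZE);	/* byte position inside that tuple */
}

int
main(void)
{
	int32_t		pageno, off;

	lo_offset_to_page(5000, &pageno, &off);
	printf("offset 5000 -> pageno %d, in-page offset %d\n", pageno, off);	/* 2, 904 */
	return 0;
}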

PostgreSQL's large object implementation is fairly simple; it is not well suited to storing very large volumes of data and can become a bottleneck for the system.
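
For completeness, here is a small client-side sketch using libpq's standard large object API (lo_creat / lo_open / lo_write / lo_close; the connection string is a placeholder). The data written below ends up on the server as rows in pg_largeobject keyed by the returned loid and consecutive pageno values:

#include <stdio.h>
#include <string.h>
#include <libpq-fe.h>
#include <libpq/libpq-fs.h>		/* INV_READ / INV_WRITE */

int
main(void)
{
	/* Placeholder connection string -- adjust for your environment. */
	PGconn	   *conn = PQconnectdb("dbname=postgres");

	if (PQstatus(conn) != CONNECTION_OK)
	{
		fprintf(stderr, "connection failed: %s", PQerrorMessage(conn));
		return 1;
	}

	/* Large object operations must run inside a transaction. */
	PQclear(PQexec(conn, "BEGIN"));

	Oid			loid = lo_creat(conn, INV_READ | INV_WRITE);	/* new entry in pg_largeobject_metadata */
	int			fd = lo_open(conn, loid, INV_WRITE);

	/* Write 5000 bytes: with LOBLKSIZE = 2048 this becomes pageno 0, 1 and 2. */
	char		buf[5000];

	memset(buf, 'x', sizeof(buf));
	lo_write(conn, fd, buf, sizeof(buf));

	lo_close(conn, fd);
	PQclear(PQexec(conn, "COMMIT"));

	printf("created large object with loid %u\n", loid);
	PQfinish(conn);
	return 0;
}

As a superuser you can then run select loid, pageno, length(data) from pg_largeobject where loid = ... to see the three chunks described above.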
