Question Details

No question body available.

Tags

postgresql polardb

Answers (3)

February 28, 2026 Score: 2 Rep: 30,532 Quality: Medium Completeness: 100%

I am aware jsonb_path_set exists in PostgreSQL 16, but I am constrained to version 14.

There's no such thing in 16, 17 or 18. It doesn't look like it's on the roadmap for 19.


  1. Is there a way in PostgreSQL 14 to update a JSONB array element by searching for a field value (like id = 2) without manually calculating the index first?

Unfortunately, no. You need to determine the location of the value you plan to overwrite. Or, filter it out regardless of where it is, then add its updated version elsewhere.
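A common PG 14 workaround is sketched below: locate the element's ordinal position with jsonb_array_elements() ... WITH ORDINALITY, then pass that (zero-based) index to jsonb_set(). The table and column names (userprofiles, profiledata) are assumptions, matching the schema used elsewhere in this thread.

```sql
-- Find the array position of the element with id = 2, then overwrite its city.
-- WITH ORDINALITY is 1-based; jsonb_set() paths are 0-based, hence the -1.
UPDATE userprofiles AS u
SET profiledata = jsonb_set(
        u.profiledata,
        ARRAY['addresses', (idx.pos - 1)::text, 'city'],
        '"Boston"'::jsonb)
FROM (
    SELECT userid, ord AS pos
    FROM userprofiles
    CROSS JOIN LATERAL jsonb_array_elements(profiledata -> 'addresses')
         WITH ORDINALITY AS a(elem, ord)
    WHERE (elem ->> 'id')::int = 2
) AS idx
WHERE u.userid = idx.userid;
```

The same shape works for the filter-and-re-add variant: aggregate the unchanged elements with jsonb_agg() and append the modified one.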


  2. How can I minimize the concurrency impact? (e.g., is there a way to lock only the specific JSON element, or is row-level locking unavoidable?)

An update rewrites not just the whole jsonb value, but the whole row with the value in it. Possibly two, if it's big enough to get TOASTed. That's because MVCC doesn't do in-place updates and needs to hold on to both the old and the new versions of the row for some time.
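To see whether a given value is large enough to be a TOAST candidate (roughly anything over ~2 kB after compression), pg_column_size() reports the stored size. Column names here are assumed from the schema used elsewhere in the thread.

```sql
-- On-disk size of each jsonb value; TOASTed values report their
-- compressed, out-of-line size here.
SELECT userid, pg_column_size(profiledata) AS bytes
FROM userprofiles
ORDER BY bytes DESC;
```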

MVCC stops at row level. There is no column/field/cell/value-level locking, and what you're describing sounds like sub-column-level locking, which would require MVCC to reach into the fields of composites, jsonb values, arrays, and multiranges inside the row. Maybe even into the digits of a numeric, or individual characters of a text, if you're feeling adventurous.

While it's not natively supported, you can remodel/normalise to do this sort of locking using 6NF or EAV. Some extensions pretend to do that (e.g. Apache AGE), but in reality they just normalise things under the hood. That's what you need to do, too.
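As a rough illustration of the EAV direction (table and column names here are hypothetical): each JSON field becomes its own row, so updating one field locks only that row.

```sql
-- One row per field of each address; updating 'city' locks only that row,
-- leaving 'type' (and every other address) free for concurrent writers.
CREATE TABLE address_fields (
    userid    int,
    addressid int,
    field     text,   -- e.g. 'city', 'type'
    value     text,
    PRIMARY KEY (userid, addressid, field)
);

UPDATE address_fields
SET value = 'Boston'
WHERE userid = 1 AND addressid = 2 AND field = 'city';
```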


  3. Given these limitations, is it best practice to refactor this model (normalize addresses into a separate table) rather than trying to optimize the JSONB update?

That's correct, normalise it. That way the addresses array becomes a regular table, its elements become rows, and everyone is free to modify them concurrently, locking at row level. If you really need to retrieve this data as jsonb, add a view or a materialized view.
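Such a view might look like the sketch below, which reassembles the normalised rows back into the old document shape with jsonb_agg(). It assumes the userprofiles and addresses tables defined in the demo further down.

```sql
-- Rebuild the jsonb document per user for callers that still expect it.
CREATE VIEW userprofiles_json AS
SELECT u.userid,
       jsonb_build_object(
           'addresses',
           jsonb_agg(jsonb_build_object('id',   a.addressid,
                                        'type', a.type,
                                        'city', a.city)
                     ORDER BY a.addressid)
       ) AS profiledata
FROM userprofiles AS u
JOIN addresses AS a USING (userid)
GROUP BY u.userid;
```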

Note that you only need the 6NF/EAV thing if you really need cell-level locking. If row-level's fine:
demo at dbfiddle

create table userprofiles (
    userid int generated by default as identity primary key,
    username text,
    notificationson boolean default true,
    theme text default 'dark',
    language text default 'en'
);

create table addresses (
    userid int references userprofiles,
    type text default 'home',
    city text,
    addressid int generated by default as identity primary key
);

with up as (
    insert into userprofiles(username) values ('JohnDoe') returning userid
)
insert into addresses
select * from up
cross join (values ('home', 'New York'),
                   ('work', 'Boston'),
                   ('vacation', 'Miami')) as v(type, city)
returning *;
userid | type     | city     | addressid
-------+----------+----------+----------
     1 | home     | New York |         1
     1 | work     | Boston   |         2
     1 | vacation | Miami    |         3
February 28, 2026 Score: 2 Rep: 258,574 Quality: Low Completeness: 10%

There is no simple way to do that. And even if there were, modifying a single attribute means storing a new copy of the entire JSON.

If you find yourself modifying JSON attributes often, you may have picked the wrong data model.

February 28, 2026 Score: 1 Rep: 568,192 Quality: Medium Completeness: 100%

Questions:

Is there a way in PostgreSQL 14 to update a JSONB array element by searching for a field value (like id = 2) without manually calculating the index first?

No.

How can I minimize the concurrency impact? (e.g., is there a way to lock only the specific JSON element, or is row-level locking unavoidable?)

Keep transactions as short as possible. In other words, finish the transaction with commit or rollback as soon as possible after your code locks rows. Locks are released automatically when the transaction finishes.

There is no way to lock with finer granularity than a row.
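For example, lock and update in one short transaction, committing as soon as the work is done. The table and column names are assumptions, matching the schema used in the other answers.

```sql
-- Keep the lock window short: take the row lock, update, commit immediately.
BEGIN;

SELECT profiledata
FROM userprofiles
WHERE userid = 1
FOR UPDATE;          -- row lock, held until COMMIT/ROLLBACK

UPDATE userprofiles
SET profiledata = jsonb_set(profiledata, '{addresses,1,city}', '"Boston"')
WHERE userid = 1;

COMMIT;              -- releases the lock
```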

Given these limitations, is it best practice to refactor this model (normalize addresses into a separate table) rather than trying to optimize the JSONB update?

Yes. See below.


In the current version of PostgreSQL, you could use JSON_TABLE() to find address entries by their id, but the version you are using doesn't support that function (it was introduced in PG 17).

WITH cte AS (
  SELECT u.userid, j.idx-1 AS idx
  FROM userprofiles AS u 
  CROSS JOIN JSON_TABLE(u.profiledata, '$.addresses[*]' COLUMNS(
      idx FOR ORDINALITY,
      id INT PATH '$.id',
      type TEXT PATH '$.type',
      city TEXT PATH '$.city'
  )) AS j
  WHERE j.id = 1
)
UPDATE userprofiles AS u
SET profiledata = jsonb_set(profiledata, 
    ('{addresses,'||cte.idx||',city}')::text[], 
    '"Cambridge"'::jsonb)
FROM cte WHERE cte.userid = u.userid;

This will lock the whole row, and it will replace the whole JSONB document. As far as I know, PostgreSQL does not support in-place updates of individual JSONB elements. The jsonb_set() function returns a whole JSONB document, and that whole document replaces the JSONB value in the table.

It's also ugly, complex code that is harder to read than it should be.

It would be much simpler to use normal rows and columns for your addresses, instead of JSONB. Example:

CREATE TABLE useraddresses (
    userid INT REFERENCES userprofiles,
    addressid INT,
    PRIMARY KEY (userid, addressid),
    type VARCHAR(16) NOT NULL,
    city VARCHAR(64) NOT NULL
);

This makes it much easier to do all the things you wanted:

  • Find the row with id=1 for a given user.

  • Lock only one address entry, not the whole user profile.

  • Update one city, not the whole user profile.

  • Works with PostgreSQL 14 (note: you should upgrade anyway, since PG 14 will be end-of-life later this year).

INSERT INTO useraddresses VALUES
(1, 1, 'home', 'New York'),
(1, 2, 'work', 'Boston'),
(1, 3, 'vacation', 'Miami');

UPDATE useraddresses AS a
SET city = 'Cambridge'
FROM userprofiles AS p
WHERE p.username = 'johndoe'
  AND p.userid = a.userid
  AND a.addressid = 1;

This UPDATE is also far simpler: it's a plain, straightforward UPDATE, with no CTE, JSON_TABLE(), or JSON path expressions involved. That makes the code a lot easier to read, debug, and maintain.

Using JSON/JSONB in an SQL database is most often the wrong choice, full stop. It makes many kinds of queries more complex and less efficient. It also takes a lot more space to store data in JSON/JSONB formats than if you use normal rows and columns.

I wrote a chapter about using JSON/JSONB in my book, More SQL Antipatterns.