Question Details

No question body available.

Tags

postgresql entity-framework-core concurrency jsonb sql-returning

Answers (2)

March 8, 2026 Score: 5 Rep: 258,679 Quality: Medium Completeness: 90%

The answer depends on the transaction isolation level in use:

  • if you are using the default isolation level READ COMMITTED, PostgreSQL will behave exactly the way you want: the second transaction blocks until the first one commits, and it then sees the row in the state the first transaction left it in

  • if you are using one of the higher isolation levels, REPEATABLE READ or SERIALIZABLE, the second transaction will also block, but it will fail with a serialization error when the first transaction commits
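A minimal two-session sketch of the READ COMMITTED case (table and column names are only illustrative, borrowed from the second answer below):

```sql
-- Session A                          -- Session B
BEGIN;
UPDATE "Media"
SET "Thumbnails" =
    "Thumbnails" || '{"url":1}'::jsonb
WHERE "BlobName" = 12;
                                      BEGIN;
                                      -- blocks on the row lock held by A
                                      UPDATE "Media"
                                      SET "Thumbnails" =
                                          "Thumbnails" || '{"url":2}'::jsonb
                                      WHERE "BlobName" = 12;
COMMIT;
                                      -- B resumes, re-reads A's committed
                                      -- version and appends to it, so both
                                      -- URLs end up in the array
                                      COMMIT;
```

Under REPEATABLE READ, session B would instead fail at the point where it resumes with "ERROR: could not serialize access due to concurrent update", and the application has to retry the transaction.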

March 9, 2026 Score: 0 Rep: 1,047 Quality: Low Completeness: 60%

If you run this:

drop table if exists "Media";
drop function if exists slow(int, int);

create table "Media" (
    "ContainerName" int,
    "BlobName" int,
    "Thumbnails" jsonb NOT NULL DEFAULT '[]'::jsonb
);
insert into "Media" select 1, generate_series(10, 20);

create function slow(valuein int, sleep int) returns int as
'select valueout from (select valuein as valueout, pg_sleep(sleep)) as s;'
language sql;

\! psql -c 'UPDATE "Media" SET "Thumbnails" = "Thumbnails" || '"'"'{"url":1}'"'"'::jsonb WHERE slow("ContainerName",1)=1 AND "BlobName" = 12 RETURNING *' &
\! psql -c 'UPDATE "Media" SET "Thumbnails" = "Thumbnails" || '"'"'{"url":2}'"'"'::jsonb WHERE slow("ContainerName",0)=1 AND "BlobName" = 12 RETURNING *' &

where the slow() function is used to artificially make request B progress faster than request A, then request A returns:

 ContainerName | BlobName |        Thumbnails
---------------+----------+--------------------------
             1 |       12 | [{"url": 2}, {"url": 1}]
(1 row)

There's a very low chance of this happening with index access, but concurrent sequential scans may start at different points in the table and observe the same behavior.

The explanation: under READ COMMITTED, when an UPDATE encounters a row that has been modified and committed since the query began, PostgreSQL re-evaluates the WHERE clause against the new row version and, if it still matches, updates that version (pro: it doesn't fail on a write conflict the way REPEATABLE READ does; con: the read is not consistent with the snapshot used for other rows, if other rows were read).
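The composed result shown above can be checked directly: because each statement appends to the array it reads from the latest committed row version, the second append operates on the first one's output. jsonb's `||` operator appends a bare object to an array as a single element:

```sql
select '[]'::jsonb || '{"url":2}'::jsonb || '{"url":1}'::jsonb;
-- [{"url": 2}, {"url": 1}]
```

This is why both URLs survive, just in the order the updates committed rather than the order the requests were issued.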