From 3d5ddcd2bea70d00e09ed3e06ae956c756b7a312 Mon Sep 17 00:00:00 2001
From: Shay Rojansky
Date: Thu, 12 Mar 2026 19:35:28 +0200
Subject: [PATCH] Remove large object doc page

Also add breaking change note for multiplexing

See https://github.com/npgsql/npgsql/pull/6493
---
 conceptual/Npgsql/large-objects.md      | 38 ------------------------
 conceptual/Npgsql/release-notes/11.0.md | 16 +++++++++++
 2 files changed, 16 insertions(+), 38 deletions(-)
 delete mode 100644 conceptual/Npgsql/large-objects.md
 create mode 100644 conceptual/Npgsql/release-notes/11.0.md

diff --git a/conceptual/Npgsql/large-objects.md b/conceptual/Npgsql/large-objects.md
deleted file mode 100644
index a38d1534..00000000
--- a/conceptual/Npgsql/large-objects.md
+++ /dev/null
@@ -1,38 +0,0 @@
-# Large Objects
-
-The Large Objects feature is a way of storing large files in a PostgreSQL database. Files can normally be stored in bytea columns but there are two downsides; a file can only be 1 GB and the backend buffers the whole file when reading or writing a column, which may use significant amounts of RAM on the backend.
-
-With the Large Objects feature, objects are instead stored in a separate system table in smaller chunks and provides a streaming API for the user. Each object is given an integral identifier that is used for accessing the object, that can, for example, be stored in a user's table containing information about this object.
-
-## Example
-
-```csharp
-// Retrieve a Large Object Manager for this connection
-var manager = new NpgsqlLargeObjectManager(Conn);
-
-// Create a new empty file, returning the identifier to later access it
-uint oid = manager.Create();
-
-// Reading and writing Large Objects requires the use of a transaction
-using (var transaction = Conn.BeginTransaction())
-{
-    // Open the file for reading and writing
-    using (var stream = manager.OpenReadWrite(oid))
-    {
-        var buf = new byte[] { 1, 2, 3 };
-        stream.Write(buf, 0, buf.Length);
-        stream.Seek(0, System.IO.SeekOrigin.Begin);
-
-        var buf2 = new byte[buf.Length];
-        stream.Read(buf2, 0, buf2.Length);
-
-        // buf2 now contains 1, 2, 3
-    }
-    // Save the changes to the object
-    transaction.Commit();
-}
-```
-
-## See also
-
-See the [PostgreSQL documentation](http://www.postgresql.org/docs/current/static/largeobjects.html) for more information. All functionality are implemented and wrapped in the classes `NpgsqlLargeObjectManager` and `NpgsqlLargeObjectStream` using standard .NET Stream as base class.
diff --git a/conceptual/Npgsql/release-notes/11.0.md b/conceptual/Npgsql/release-notes/11.0.md
new file mode 100644
index 00000000..81e30782
--- /dev/null
+++ b/conceptual/Npgsql/release-notes/11.0.md
@@ -0,0 +1,16 @@
+# Npgsql 11.0 Release Notes
+
+Npgsql version 11.0 is in development.
+
+> [!NOTE]
+> We're considering dropping support for synchronous APIs (`NpgsqlConnection.Open`, `NpgsqlCommand.ExecuteNonQuery`, etc.) starting with Npgsql 11.0. The current plan is to deprecate these APIs in Npgsql 11.0 by making them throw a runtime exception by default (with a switch to re-enable synchronous I/O), and to possibly remove them completely in Npgsql 12.0. This is in line with ASP.NET Core and .NET APIs in general, which are moving in the direction of async I/O only (for example, `System.IO.Pipelines` doesn't have synchronous I/O). If you have any questions, or want to share your experience/issues with async I/O, please feel free to post in the [issue](https://github.com/npgsql/npgsql/issues/5865).
+
+## Breaking changes
+
+### Removed multiplexing
+
+Previous versions of Npgsql supported a high-performance mode called "multiplexing". While this mode could increase throughput in certain extremely high-performance scenarios, it did not scale well in highly concurrent situations with many CPU cores, and it added quite a bit of complexity to the codebase. We'll evaluate re-introducing a similar implementation that's more scalable and better designed.
+
+### Removed deprecated large object APIs
+
+Npgsql 11.0 removes its APIs for accessing PostgreSQL [large objects](https://www.postgresql.org/docs/current/largeobjects.html); these had been obsolete since Npgsql 8.0. The APIs were old and had some design issues, and they were also not necessary, as the corresponding PostgreSQL functions can simply be called directly.
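+
+As a minimal, untested sketch of what calling these functions directly might look like (assuming an open `NpgsqlConnection` named `conn`; the function names are PostgreSQL's server-side large object functions, not Npgsql APIs):
+
+```csharp
+// Large object operations must run inside a transaction
+using var tx = conn.BeginTransaction();
+
+// Create a new large object; the server assigns and returns its OID
+var createCmd = new NpgsqlCommand("SELECT lo_creat(-1)", conn, tx);
+var oid = (uint)createCmd.ExecuteScalar()!;
+
+// Open it for reading and writing (393216 = INV_READ | INV_WRITE)
+var openCmd = new NpgsqlCommand("SELECT lo_open($1, 393216)", conn, tx);
+openCmd.Parameters.Add(new NpgsqlParameter { Value = oid });
+var fd = (int)openCmd.ExecuteScalar()!;
+
+// Write some bytes through the returned descriptor
+var writeCmd = new NpgsqlCommand("SELECT lowrite($1, $2)", conn, tx);
+writeCmd.Parameters.Add(new NpgsqlParameter { Value = fd });
+writeCmd.Parameters.Add(new NpgsqlParameter { Value = new byte[] { 1, 2, 3 } });
+writeCmd.ExecuteNonQuery();
+
+// Close the descriptor and commit
+var closeCmd = new NpgsqlCommand("SELECT lo_close($1)", conn, tx);
+closeCmd.Parameters.Add(new NpgsqlParameter { Value = fd });
+closeCmd.ExecuteNonQuery();
+tx.Commit();
+```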