Yeah, this has been possible for years with no funky sharding etc., but given Express Edition's memory and CPU limitations it doesn't make much sense in most scenarios.
If, on the other hand, you have an application that just needs to dump/archive data into an ever-growing table and you like working with SQL Server, then sure, this could be an option. Now, I'm not endorsing this, but yes, it is possible.
Of course there is always a catch :)
The script below shows that the hard limit is enforced, i.e. an error is raised when attempting to exceed Express Edition's 10GB cap on SQL Server 2016. The trick is setting FILEGROWTH to 10GB, so the first auto-grow attempt has to push the data file past the cap:
SET NOCOUNT ON;
CREATE DATABASE d;
GO

USE d;
ALTER DATABASE d SET RECOVERY SIMPLE WITH NO_WAIT;
ALTER DATABASE [d] MODIFY FILE ( NAME = N'd', FILEGROWTH = 10GB );
CREATE TABLE t (c1 nchar(4000));
GO

-- Fill the initial data file until it has to grow; the 10GB growth attempt then fails.
INSERT INTO t(c1) VALUES ('1');
GO 700

/* delete test database
USE master;
GO
DROP DATABASE d;
*/
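For reference, a quick check along the lines of the size query further down; assuming the test database is still named d, something like this shows the edition plus the current size and growth increment of its data file:

SELECT SERVERPROPERTY('Edition')  AS edition,
       DB_NAME(database_id)       AS database_name,
       [type_desc],
       (size * 8 / 1024)          AS size_mb,
       (growth * 8 / 1024)        AS growth_mb  -- growth is in 8KB pages here, not a percentage
FROM sys.master_files
WHERE DB_NAME(database_id) = 'd';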
Now try the same thing with the master database, bumping the growth increment to 15GB for fun:
SET NOCOUNT ON;
USE [master];
ALTER DATABASE [master] MODIFY FILE ( NAME = N'master', FILEGROWTH = 15GB );
CREATE TABLE t (c1 nchar(4000));
GO

INSERT INTO t(c1) VALUES ('1');
GO 700
It succeeds without issue and the database is now 15GB in size:
SELECT SERVERPROPERTY('Edition') AS [edition],
       (size * 8 / 1024)         AS size_mb
FROM sys.master_files
WHERE DB_NAME(database_id) = 'master'
  AND [type_desc] = 'ROWS';
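If you try this on an instance you care about, you'll probably want to put things back afterwards. A rough cleanup sketch (shrinking master is generally discouraged and may not give back the full 15GB):

USE [master];
DROP TABLE t;
-- Restore a sensible growth increment, then try to reclaim the space (best effort).
ALTER DATABASE [master] MODIFY FILE ( NAME = N'master', FILEGROWTH = 10% );
DBCC SHRINKFILE (N'master', 10);  -- target size in MB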
You can also exceed the limit with model and tempdb, but there is little point and it will likely introduce issues, since new databases (which are based on model) can't be created once model exceeds the cap.
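If you want to see that for yourself, here is a sketch along the same lines as the scripts above (untested; modeldev is model's default logical data file name, and the row count may need adjusting depending on the initial file size):

USE model;
ALTER DATABASE model MODIFY FILE ( NAME = N'modeldev', FILEGROWTH = 15GB );
CREATE TABLE t (c1 nchar(4000));
GO

INSERT INTO t(c1) VALUES ('1');
GO 700

-- Once model has grown past 10GB, creating a new database should fail,
-- because the new database inherits model's file sizes.
CREATE DATABASE this_should_fail;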