Client-Side API
NukeBase's client library provides a real-time connection to your database through WebSockets. The client handles connection management, request tracking, and event dispatching automatically.
Connection Setup
The client automatically establishes a secure WebSocket connection:
<script src="NukeBaseSDK.js"></script>
// Connect to the server (returns a Promise)
connectWebSocket().then(() => {
console.log("Connected and ready to use NukeBase");
// Start using NukeBase methods here
});
The connection is automatically maintained:
- Reconnects when browser tabs regain focus
- Uses WSS for HTTPS sites, WS for HTTP sites
- Dispatches events for server notifications
Data Operations
Setting Data
The set() function creates or replaces data at a specific path:
Auto-creation: The set() function will automatically create any missing parent objects in the path. You don't need to create intermediate objects manually.
// Set a complete object
set("users.john", { name: "John Doe", age: 32 }).then(response => {
console.log("User created successfully");
});
// Set a single value
set("users.john.email", "john@example.com").then(response => {
console.log(response);
});
// Auto-creates parent objects - even if 'users' doesn't exist
set("users.alice.profile.preferences.theme", "dark").then(response => {
// Creates: { users: { alice: { profile: { preferences: { theme: "dark" } } } } }
console.log("Theme set with auto-created parent objects");
});
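Conceptually, the auto-creation behavior can be sketched in plain JavaScript. This is an illustrative helper (setDeep is not part of the SDK), showing how each missing parent object along a dot-separated path gets created before the final assignment:

```javascript
// Illustrative sketch of auto-creation: walk the dot path, creating
// any missing parent object, then assign the value at the last key.
function setDeep(root, path, value) {
  const keys = path.split(".");
  let node = root;
  for (let i = 0; i < keys.length - 1; i++) {
    // Create missing parent objects on the way down
    if (typeof node[keys[i]] !== "object" || node[keys[i]] === null) {
      node[keys[i]] = {};
    }
    node = node[keys[i]];
  }
  node[keys[keys.length - 1]] = value;
  return root;
}

const db = {};
setDeep(db, "users.alice.profile.preferences.theme", "dark");
// db is now { users: { alice: { profile: { preferences: { theme: "dark" } } } } }
```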
Getting Data
Retrieve data with the get() function:
// Get a single user
get("users.john").then(response => {
console.log(response.data); // User data
});
// Get entire collection
get("users").then(response => {
const users = response.data;
// Process users...
});
Updating Data
Update existing data without replacing unspecified fields:
Auto-creation: Like set(), the update() function will automatically create any missing parent objects in the path if they don't exist.
// Update specific fields
update("users.john", {
lastLogin: Date.now(),
loginCount: 42
}).then(response => {
console.log(response);
});
// Update a single property
update("users.john.status", "online").then(response => {
console.log(response);
});
// Auto-creates missing parent objects
update("settings.app.notifications.email", true).then(response => {
// If 'settings' doesn't exist, creates the entire path
console.log("Setting created with auto-generated parents");
});
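The difference between update() and set() is a merge versus a replace. This sketch (applyUpdate is an illustrative name, not SDK code) shows the merge semantics: specified fields are overwritten or added, everything else is left untouched:

```javascript
// Illustrative sketch of update() semantics: merge the given fields
// into the existing object without dropping unspecified fields.
function applyUpdate(current, changes) {
  return Object.assign({}, current, changes);
}

const john = { name: "John Doe", age: 32, status: "offline" };
const updated = applyUpdate(john, { status: "online", loginCount: 42 });
// name and age survive; status is overwritten; loginCount is added
```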
Removing Data
Delete data at a specific path:
// Remove a user
remove("users.john").then(response => {
console.log("User deleted");
});
// Remove a specific field
remove("users.john.temporaryToken").then(response => {
console.log(response);
});
Querying Data
Query allows you to search through collections and find items that match specific conditions. The query string uses JavaScript expressions where child represents each item being evaluated.
How queries work: NukeBase iterates through each child at the specified path and evaluates your condition. Items where the condition returns true are included in the results.
// Basic equality check
query("users", "child.age == 32").then(response => {
console.log(response.data); // All users who are exactly 32
});
// Using comparison operators
query("products", "child.price < 50").then(response => {
console.log(response.data); // All products under $50
});
// Compound conditions with AND (&&)
query("products", "child.price < 100 && child.category == 'electronics'").then(response => {
console.log(response.data); // Affordable electronics
});
// Compound conditions with OR (||)
query("users", "child.role == 'admin' || child.role == 'moderator'").then(response => {
console.log(response.data); // All admins and moderators
});
// Text search with includes()
query("posts", "child.title.includes('JavaScript')").then(response => {
console.log(response.data); // Posts with "JavaScript" in the title
});
// Checking nested properties
query("users", "child.profile.location == 'New York'").then(response => {
console.log(response.data); // Users located in New York
});
// Combining multiple conditions
query("orders", "child.status == 'pending' && child.total > 100 && child.items.length > 2").then(response => {
console.log(response.data); // Large pending orders with multiple items
});
// Checking if a property exists
query("users", "child.premiumAccount == true").then(response => {
console.log(response.data); // All premium users
});
// Using NOT operator
query("tasks", "child.completed != true").then(response => {
console.log(response.data); // All incomplete tasks
});
// Date comparisons (assuming timestamps)
query("events", "child.date > " + Date.now()).then(response => {
console.log(response.data); // Future events
});
Query Syntax Reference
Queries support standard JavaScript operators and methods:
Operator/Method | Description | Example
---|---|---
== | Equal to | child.status == 'active'
!= | Not equal to | child.deleted != true
<, >, <=, >= | Comparison | child.age >= 18
&& | Logical AND | child.active && child.verified
\|\| | Logical OR | child.role == 'admin' \|\| child.role == 'mod'
.includes() | String contains | child.email.includes('@gmail.com')
.length | Array/string length | child.tags.length > 3
Important: The child variable represents each item at the path you're querying. For example, when querying "users", child represents each individual user object.
Real-time Subscriptions
Important: All subscription functions (getSub, getSubChanged, querySub, and querySubChanged) immediately send the current data when the subscription is created. This ensures your UI can display the current state right away, before any changes occur.
Basic Subscriptions
Get real-time updates when data changes. Each subscription fires immediately with the current data, then again whenever the data changes:
// Subscribe to changes on a path
const unsubscribe = getSub("value@users.john", event => {
// This fires immediately with current data, then on every change
console.log("User data:", event.data);
});
// When finished listening
unsubscribe().then(() => {
console.log("Unsubscribed successfully");
});
Query Subscriptions
Subscribe to data matching specific conditions:
// Subscribe to active users
const unsubscribe = querySub("value@users", "child.status == 'online'", event => {
// Receives all currently online users immediately, then updates
const onlineUsers = event.data;
updateOnlineUsersList(onlineUsers);
});
Changed-Only Subscriptions
Despite the name, these subscriptions ALSO receive the initial data immediately when created, then only fire again when data actually changes:
Important for getSubChanged and querySubChanged: What you receive depends on what path you're watching:
- If watching "users" and John updates his name, you get John's COMPLETE object (all fields)
- If watching "users.john" and a field changes, you get ONLY the changed field (e.g., just {name: "New Name"})
- If watching "users.john.name" and it changes, you get just the new name value
- The deeper your watch path, the more specific the change data
// getSubChanged - watching a collection
const unsubscribe = getSubChanged("value@users", event => {
// Initial: all users
// If John updates his email:
// event.data = { john: { name: "John", email: "new@email.com", age: 25 } }
// You get John's COMPLETE object
updateChangedUsers(event.data);
});
// getSubChanged - watching a specific user
const unsubscribe = getSubChanged("value@users.john", event => {
// Initial: John's complete data
// If John's email changes:
// event.data = { email: "new@email.com" }
// You get ONLY the changed field
Object.assign(currentUser, event.data); // Merge changes
});
// getSubChanged - watching a specific field
const unsubscribe = getSubChanged("value@users.john.status", event => {
// Initial: "online"
// If status changes:
// event.data = "offline"
// You get just the new value
updateStatusIndicator(event.data);
});
// With query filtering - returns only the changed items
const unsubscribe = querySubChanged("value@users",
"child.age > 21", event => {
// If user John (age 25) updates only his name:
// event.data = { john: { name: "John Doe", age: 25, email: "john@example.com" } }
// You get John's COMPLETE object, not just the changed name field
console.log("Users that changed:", event.data);
});
// Example: monitoring low stock products
const unsubscribe = querySubChanged("value@products",
"child.stock < 5", event => {
// If product ABC updates its price, you get:
// { ABC: { name: "Widget", stock: 3, price: 29.99 } }
// The complete product object for ONLY the product that changed
Object.keys(event.data).forEach(productId => {
updateSingleProduct(productId, event.data[productId]);
});
});
Operation-Specific Subscriptions
Listen for specific types of operations by prefixing your path with an operation type:
Available operation types:
- value@ - Fires on any change (set, update, or remove)
- set@ - Fires only when data is created or completely replaced
- update@ - Fires only when existing data is partially updated
- remove@ - Fires only when data is deleted
Compatibility: Operation prefixes work with all subscription functions: getSub, getSubChanged, querySub, and querySubChanged.
// Listen only for updates to user data
const unsubscribe = getSub("update@users.john", event => {
console.log("User was updated:", event.data);
});
// Listen for new data being set
const unsubscribe = getSub("set@orders", event => {
console.log("New order created:", event.data);
});
// Listen for data removal
const unsubscribe = getSub("remove@users", event => {
console.log("A user was deleted:", event.path);
});
// Operation-specific with getSubChanged
const unsubscribe = getSubChanged("set@products", event => {
// Only fires when NEW products are created (not updates)
console.log("New products added:", event.data);
});
// Operation-specific with queries
const unsubscribe = querySub("update@users", "child.status == 'premium'", event => {
// Only fires when premium users are UPDATED (not created or deleted)
console.log("Premium users updated:", event.data);
});
// Combining with querySubChanged
const unsubscribe = querySubChanged("remove@tasks", "child.completed == true", event => {
// Only fires when completed tasks are DELETED
console.log("Completed tasks removed:", event.data);
});
// Default behavior without prefix (same as value@)
const unsubscribe = getSub("users.john", event => {
// Fires on ANY change: set, update, or remove
console.log("Something changed:", event.data);
});
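Parsing an operation-prefixed path can be sketched as a simple split on @, with value as the default when no prefix is given. The helper name parseSubPath is illustrative, not part of the SDK:

```javascript
// Illustrative sketch: split "op@path" into its parts; a path with no
// prefix behaves the same as "value@path".
function parseSubPath(sub) {
  const at = sub.indexOf("@");
  if (at === -1) return { op: "value", path: sub }; // no prefix: same as value@
  return { op: sub.slice(0, at), path: sub.slice(at + 1) };
}

parseSubPath("update@users.john"); // { op: "update", path: "users.john" }
parseSubPath("users.john");        // { op: "value", path: "users.john" }
```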
Subscription Bubble-Up Behavior
Understanding how subscription changes propagate is crucial for designing efficient real-time applications. NukeBase subscriptions follow a "bubble-up" pattern:
Key Concept: Changes Bubble UP, Not DOWN
- Bubble UP ✅: Changes at child paths trigger parent subscriptions
- No Trickle DOWN ❌: Changes at parent paths do NOT trigger child subscriptions
// Set up subscriptions at different levels
getSub("value@calls", (event) => {
console.log("1. Calls level:", event.data);
});
getSub("value@calls.123", (event) => {
console.log("2. Specific call:", event.data);
});
getSub("value@calls.123.answer", (event) => {
console.log("3. Answer level:", event.data);
});
// Scenario 1: Change at deep level (bubbles UP)
await set("calls.123.answer", { type: "answer", sdp: "..." });
// ✅ Fires: 1. Calls level (bubbled up)
// ✅ Fires: 2. Specific call (bubbled up)
// ✅ Fires: 3. Answer level (direct match)
// Scenario 2: Change at middle level (bubbles UP, not DOWN)
await update("calls.123", { status: "active" });
// ✅ Fires: 1. Calls level (bubbled up)
// ✅ Fires: 2. Specific call (direct match)
// ❌ NOT fired: 3. Answer level (no trickle down)
// Scenario 3: Change at top level (no trickle DOWN)
await set("calls", { "456": { offer: {...} } });
// ✅ Fires: 1. Calls level (direct match)
// ❌ NOT fired: 2. Specific call (no trickle down)
// ❌ NOT fired: 3. Answer level (no trickle down)
Practical Implications:
- Parent subscriptions are "catch-all": Watching users will fire for ANY change in ANY user or their properties
- Child subscriptions are specific: Watching users.john.email only fires when that exact path or its children change
- Performance consideration: Higher-level subscriptions fire more frequently due to bubble-up
- Data replacement warning: If you set() at a parent level, child subscriptions may stop working as their paths no longer exist
Custom Server Functions
Execute custom logic on the server without exposing implementation details:
// Call the server function
wsFunction("addNumbers", {
num1: 5,
num2: 7
})
.then(response => {
// Display the result returned by the server
console.log(`The sum is: ${response.data}`); // Output: The sum is: 12
});
This straightforward example shows how WebSocket functions allow you to execute code on the server and return results directly to the client, with the return value accessible via the data property of the response.
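Conceptually, the server side of this mechanism is a registry of named handlers whose return value becomes response.data. The names below (registry, register, invoke) are illustrative; the doc's addWsFunction presumably fills this role, but its exact signature is not shown here:

```javascript
// Conceptual sketch of a server-side function registry for wsFunction
// calls: look up the handler by name, run it on the payload, and wrap
// the result in the standard response envelope.
const registry = {};
function register(name, handler) { registry[name] = handler; }
function invoke(name, payload, requestId) {
  try {
    return { action: name, data: registry[name](payload),
             requestId, status: "Success" };
  } catch (err) {
    return { status: "Failed", message: err.message, requestId };
  }
}

register("addNumbers", ({ num1, num2 }) => num1 + num2);
const response = invoke("addNumbers", { num1: 5, num2: 7 }, "RH8HZX9P");
// response.data is 12, response.status is "Success"
```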
File Operations
Upload files to your server:
// Upload from a file input
const fileInput = document.getElementById('profilePicture');
const file = fileInput.files[0];
setFile("users.john.profile.jpg", file).then(response => {
showSuccess("Profile picture uploaded!");
updateProfileImage(response.data.url);
});
The file upload process:
- Reads the file as an ArrayBuffer
- Adds path and filename metadata
- Sends the binary data over WebSocket
- Returns the server response (typically with a URL to access the file)
Authentication
NukeBase provides a built-in cookie-based authentication system. When you configure authPath: "users"
in your domain setup, authentication endpoints are automatically available and cookies are handled seamlessly.
How it works:
- Configure authPath: "users" in your domain setup
- Use the built-in authentication endpoints from your client
- Server automatically sets HTTP cookies (uid, token)
- WebSocket connections automatically use these cookies
- User information populates the admin object for security rules
Authentication Endpoints
NukeBase automatically provides these authentication endpoints when authPath is configured:
Available Endpoints:
- POST /auth - Login, registration, and anonymous user creation
- POST /logout - Clear authentication cookies
- POST /changepassword - Change user password (requires authentication)
Login and Registration
Use the /auth endpoint to log in existing users or register new ones:
// Login or register a user
async function login(username, password) {
const response = await fetch('/auth', {
method: 'POST',
headers: { 'Content-Type': 'application/json' },
credentials: 'same-origin', // Important for cookies
body: JSON.stringify({ username, password })
});
if (response.ok) {
const data = await response.json();
if (data.success) {
console.log('Authenticated as:', data.username || 'Anonymous');
// Reconnect WebSocket to use new auth cookies
if (socket && socket.readyState === WebSocket.OPEN) {
socket.close();
}
await connectWebSocket();
}
} else {
console.log('Authentication failed');
}
}
// Create anonymous user (no username/password)
async function loginAnonymous() {
const response = await fetch('/auth', {
method: 'POST',
headers: { 'Content-Type': 'application/json' },
credentials: 'same-origin'
// No body - creates anonymous user
});
if (response.ok) {
const data = await response.json();
console.log('Anonymous user created:', data.uid);
await connectWebSocket();
}
}
Logout
Clear authentication cookies to log out the user:
async function logout() {
const response = await fetch('/logout', {
method: 'POST',
credentials: 'same-origin'
});
if (response.ok) {
const data = await response.json();
if (data.success) {
console.log('Logged out successfully');
// Reconnect as anonymous user
if (socket && socket.readyState === WebSocket.OPEN) {
socket.close();
}
await connectWebSocket();
}
}
}
Change Password
Allow authenticated users to change their password:
async function changePassword(newPassword) {
const response = await fetch('/changepassword', {
method: 'POST',
headers: { 'Content-Type': 'application/json' },
credentials: 'same-origin',
body: JSON.stringify({ newPassword })
});
if (response.ok) {
const data = await response.json();
if (data.success) {
console.log('Password changed successfully');
} else {
console.log('Failed to change password');
}
}
}
Using Authentication in Security Rules
Once authenticated, the admin object is available in your security rules:
// In your rules.js
module.exports = {
"users": {
"$userId": {
// Anyone can read profiles
"read": "true",
// Only the user themselves can edit
"write": "admin.uid == $userId",
"private": {
// Private data only visible to the user
"read": "admin.uid == $userId"
}
}
},
"adminPanel": {
// Only users with admin role can access
"read": "admin.role == 'admin'",
"write": "admin.role == 'admin'"
}
};
Security Notes:
- Use HTTPS in production to protect cookies
- Regularly clean up expired tokens to prevent database bloat
- Consider implementing rate limiting on login attempts
- The generateRequestId() function creates secure 8-character tokens
Database Structure for Authentication
The authentication system expects user data to be structured like this:
{
"users": {
"ML96SDE5": { // Unique user UID
"auth": {
"username": "matt123", // Unique username
"password": "helloworld", // Password
"role": "admin", // Optional role for permissions
"tokens": {
"WRL75TPY": 1748357368415, // token: expiry timestamp
"V1WM3FR2": 1748357670935
}
},
"profile": {
// Other user data
}
}
}
}
Token Management: Tokens are stored as key-value pairs where the key is the token (generated with generateRequestId()) and the value is the expiration timestamp. This makes it easy to clean up expired tokens and validate sessions.
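Because tokens map directly to expiry timestamps, pruning expired sessions is a simple filter against the current time. This sketch uses an illustrative helper name (cleanTokens), not an SDK function:

```javascript
// Illustrative sketch of expired-token cleanup: keep only tokens whose
// expiry timestamp is still in the future.
function cleanTokens(tokens, now) {
  const live = {};
  for (const [token, expiry] of Object.entries(tokens)) {
    if (expiry > now) live[token] = expiry; // keep unexpired tokens
  }
  return live;
}

const tokens = { WRL75TPY: 1748357368415, V1WM3FR2: 1 };
const remaining = cleanTokens(tokens, 1748357368000);
// WRL75TPY survives (expires later); V1WM3FR2 is pruned
```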
Response Format
All NukeBase operations return a standardized response object:
{
// The operation performed
action: "get",
// Data from the operation
data: {
"user123": { name: "John", age: 32 },
"user456": { name: "Jane", age: 28 }
},
// For tracking the request
requestId: "RH8HZX9P",
// Success or Failed
status: "Success"
}
When an error occurs, the response includes:
{
status: "Failed",
message: "Error description here"
}
Server-Side API
NukeBase is designed for simplicity. To get up and running, you only need:
- server/database.exe: The core database engine
- server/data.json: Your database file
- server/rules.js: JSON security rules (coming soon)
- server/app.js: Your configuration file
- public/(index.html, ...): For serving HTML files
That's it! No complex configuration or additional services required.
Setup and Initialization
NukeBase server runs as a Node.js application with a simple setup process. The key components are:
- database.exe: Core database engine that must be in your project's root directory
- app.js: Configuration file that sets up domains, middleware, and event handlers
Basic Server Structure
The server configuration is defined using a module export function that receives core dependencies:
module.exports = (fs, express, addFunction, functionMatch, addWsFunction, get, set, update, remove, query, generateRequestId, data, addDomain, startDB, onConnection, onClose, basePath) => {
// Server configuration goes here
}
Domain Configuration
NukeBase supports multiple domains with custom SSL certificates, either from a single server/database instance or across multiple servers/databases using nginx as a reverse proxy.
const productionDomain = addDomain({
domain: "example.com",
https: {
key: '/etc/letsencrypt/live/example.com/privkey.pem',
cert: '/etc/letsencrypt/live/example.com/fullchain.pem'
},
port: 3000,
authPath: "users" // Path where user authentication data is stored
});
// You can add as many domains as needed
const anotherDomain = addDomain({
domain: "another-domain.com",
https: {
key: '/etc/letsencrypt/live/another-domain.com/privkey.pem',
cert: '/etc/letsencrypt/live/another-domain.com/fullchain.pem'
},
port: 3001,
authPath: "users" // Path where user authentication data is stored
});
For localhost "127.0.0.1" development environments, you can use empty strings for the SSL credentials:
const devDomain = addDomain({
domain: "exampledomain.com",
https: {
key: '',
cert: ''
},
authPath: "users" // Path where user authentication data is stored
});
Authentication Path Configuration
The authPath parameter tells NukeBase where to find user authentication data in your database. When specified, NukeBase automatically handles authentication, including:
- Token validation for WebSocket connections
- Built-in authentication endpoints (/auth, /logout, /changepassword)
- Automatic population of the admin object in security rules
Express Middleware
Each domain has its own Express app instance that you can configure:
// Serve static files
myDomain.app.use(express.static(path.join(basePath, `../public`)));
// Serve files with long cache time
myDomain.app.use('/files', express.static(path.join(basePath, `../files`), {
maxAge: 60 * 60 * 24 * 1000 * 365 // 1 year in milliseconds
}));
Storage Configuration
NukeBase provides built-in support for S3-compatible storage buckets (AWS S3, DigitalOcean Spaces, MinIO, etc.). Configure storage to handle file uploads, downloads, and automatic file management with security rules.
S3 Storage Setup
Use the s3Config() function to configure S3-compatible storage for your domain:
// Configure S3 storage for your domain
const storage = s3Config({
expressApp: myDomain,
endpoint: 'nyc3.digitaloceanspaces.com', // Or your S3 endpoint
accessKeyId: 'your-access-key',
secretAccessKey: 'your-secret-key',
signatureVersion: 'v4',
s3ForcePathStyle: false,
bucketName: 'your-bucket-name'
});
// The storage object provides upload and delete functions
const { uploadS3, deleteS3 } = storage;
Configuration Parameters:
- expressApp - Your domain object (e.g., myDomain)
- endpoint - S3 endpoint URL (AWS: s3.amazonaws.com, DO: nyc3.digitaloceanspaces.com)
- accessKeyId - Your S3 access key ID
- secretAccessKey - Your S3 secret access key
- signatureVersion - Signature version (typically 'v4')
- s3ForcePathStyle - Force path-style URLs (false for most providers)
- bucketName - Your storage bucket name
Automatic File Endpoints
When storage is configured, NukeBase automatically provides file management endpoints:
Built-in File Endpoints:
- GET /files/* - Download files with automatic security rule validation
- POST /get-upload-url - Get signed upload URLs for direct client uploads
Client-Side File Upload
Use the built-in endpoints to handle file uploads from your client:
// Upload a file using the built-in endpoints
async function uploadFile(file, path) {
// 1. Get upload URL from server
const uploadResponse = await fetch('/get-upload-url', {
method: 'POST',
headers: { 'Content-Type': 'application/json' },
credentials: 'same-origin',
body: JSON.stringify({
filename: file.name,
contentType: file.type,
path: path // e.g., 'users/profile-pictures'
})
});
if (!uploadResponse.ok) {
throw new Error('Failed to get upload URL');
}
const { uploadUrl, filename } = await uploadResponse.json();
// 2. Upload directly to S3 using signed URL
const uploadResult = await fetch(uploadUrl, {
method: 'PUT',
body: file,
headers: {
'Content-Type': file.type
}
});
if (uploadResult.ok) {
console.log('File uploaded successfully:', filename);
return filename; // Returns the full S3 key path
} else {
throw new Error('Upload failed');
}
}
Server-Side File Operations
Use the returned functions for server-side file operations in your triggers or WebSocket functions:
// Upload file from server (in triggers or WebSocket functions)
async function serverUpload(fileBuffer, path, originalName) {
const fileUrl = await uploadS3(fileBuffer, path, originalName);
console.log('File uploaded to:', fileUrl);
return fileUrl;
}
// Delete file from storage
async function deleteFile(fileKey) {
const success = await deleteS3(fileKey);
if (success) {
console.log('File deleted successfully');
}
return success;
}
// Example: Clean up old files in a trigger
addFunction("onUpdate", "users.$userId.profilePicture", async function(context) {
// If user uploads new profile picture, delete the old one
if (context.dataBefore && context.dataAfter) {
const oldPicture = context.dataBefore;
const newPicture = context.dataAfter;
if (oldPicture !== newPicture && oldPicture.includes('/')) {
// Extract file key from URL and delete old file
const fileKey = oldPicture.split('/').pop();
await deleteS3(`profile-pictures/${fileKey}`);
}
}
});
File Security Rules
File access is automatically controlled by security rules. Create rules for file paths to control who can upload and download files:
// In your rules.js - file paths are converted to dot notation
module.exports = {
"files": {
"users": {
"profile-pictures": {
// Anyone can read profile pictures
"readFile": "true",
// Only authenticated users can upload
"writeFile": "admin.uid != null"
},
"$userId": {
"private": {
// Only the user can access their private files
"readFile": "admin.uid == $userId",
"writeFile": "admin.uid == $userId"
}
}
},
"public": {
// Public files accessible to everyone
"readFile": "true",
"writeFile": "admin.role == 'admin'"
}
}
};
Important Notes:
- File paths like users/profile-pictures/image.jpg become users.profile-pictures in security rules
- Use readFile and writeFile rules (not read/write) for file operations
- The system automatically stores file references in your database at the specified path
- Files are automatically checked for upload completion and database references are updated
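The path conversion for file rules can be sketched directly from the stated behavior: drop the filename, then replace slashes with dots. The helper name fileRulePath is illustrative, not part of NukeBase:

```javascript
// Illustrative sketch: "users/profile-pictures/image.jpg" is checked
// against security rules at "users.profile-pictures".
function fileRulePath(fileKey) {
  const segments = fileKey.split("/");
  segments.pop();              // drop the filename
  return segments.join(".");   // slashes become dots
}

fileRulePath("users/profile-pictures/image.jpg"); // "users.profile-pictures"
```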
Security Rules
NukeBase uses a JSON-based security rules system to control access to your database. Rules are defined in server/rules.js and are evaluated for every database operation.
Rule Structure
Rules are defined as a JSON object that mirrors your database structure. Six types of rules control different aspects of data access:
- read - Controls who can read data at a path (triggered by get() operations)
- write - Controls who can create, update, or delete data (triggered by set(), update(), and remove() operations)
- validate - Ensures data meets specific requirements (triggered by set() and update() operations)
- query - Controls which items can be returned in query results (triggered by query() operations)
- readFile - Controls who can download files from storage (triggered by file download requests)
- writeFile - Controls who can upload files to storage (triggered by file upload requests)
Rules cascade down the data tree, with rules higher up in the tree overriding deeper rules.
Important Rule Matching Behavior:
- Read/Write rules: Multiple rules can match at the SAME depth level. If you have rules for both pets and $other at the same level, BOTH rules apply to a path like pets. Rules higher in the tree override deeper rules.
- Validate rules: Only ONE validate rule matches per path. If you have validate rules for both pets and $other at the same level, the path pets will ONLY match the pets rule, not $other.
// Example demonstrating same-depth rule matching
"pets": {
"read": "true", // This rule applies to 'pets'
"write": "admin.role == 'petOwner'", // This rule applies to 'pets'
"validate": "newData.type == 'cat' || newData.type == 'dog'" // ONLY this rule applies
},
"$other": {
"read": "admin.role == 'admin'", // This ALSO applies to 'pets' (both match)
"write": "false", // This ALSO applies to 'pets' (both match)
"validate": "newData != null" // This does NOT apply to 'pets' (only most specific)
}
Basic Example
module.exports = {
"users": {
"$userId": {
"read": "true", // Anyone can read user profiles
"write": "admin.uid == $userId", // Only the user can edit their profile
"email": {
"read": "admin.uid == $userId" // Email is private
}
}
}
};
Path Patterns
Rules support different path patterns to match your data structure. Note that exact path matching works the
same for both objects and arrays - you can specify exact paths like users.john.hobbies
regardless of whether hobbies
is an object or array.
Pattern | Description | Example
---|---|---
users.john | Exact path matching | Matches only users.john (works for both objects and arrays)
users.$userId | Wildcard matching | Matches users.alice, users.bob, etc.
posts[] | Array element validation | Validates each element in the posts array
posts | Array itself validation | Validates the entire posts array
tags.colors | Exact match for arrays | Matches the specific array at tags.colors
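Wildcard matching can be sketched as segment-by-segment comparison, where a $-prefixed segment matches anything and captures its value for use in rule expressions (like admin.uid == $userId). matchRulePath is an illustrative name, not NukeBase API:

```javascript
// Illustrative sketch of rule-path matching: literal segments must
// match exactly; $-wildcards match any segment and bind its value.
function matchRulePath(rulePath, dataPath) {
  const rule = rulePath.split(".");
  const data = dataPath.split(".");
  if (rule.length !== data.length) return null;
  const bindings = {};
  for (let i = 0; i < rule.length; i++) {
    if (rule[i].startsWith("$")) bindings[rule[i]] = data[i]; // capture wildcard
    else if (rule[i] !== data[i]) return null;                // literal mismatch
  }
  return bindings;
}

matchRulePath("users.$userId", "users.alice"); // { $userId: "alice" }
matchRulePath("users.$userId", "posts.alice"); // null
```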
Critical Array Concept: Understanding the difference between path[] and path is essential for proper validation:
- favColors[] - Rules apply to EACH individual element in the array (newData = single element)
- favColors - Rules apply to the ENTIRE array as a whole (newData = complete array)
- To validate both element type AND array size, use path[] with newRoot to check the final array length
Operations and Their Rules
Different database operations trigger different combinations of rules:
Operation | Rules Triggered | Description
---|---|---
get() | read | Only read rules are checked when retrieving data
set() | write + validate | Both write permission and data validation are required
update() | write + validate | Same as set() - must have permission and valid data
remove() | write | Only write rules are checked (newData is null)
query() | query | Query rules filter which items are returned
File Upload | writeFile | File upload permissions are checked before allowing file storage
File Download | readFile | File access permissions are checked before serving files
Rule Types in Detail
Read Rules
Control who can read data at a specific path:
// Simple read rule
"posts": {
"read": "true", // Anyone can read posts
"$postId": {
"draft": {
"read": "admin.uid == data.authorId" // Only author can read drafts
}
}
}
// Using variables in paths
"users": {
"$userId": {
"read": "true", // Anyone can read user profiles
"email": {
"read": "admin.uid == $userId" // Only the user can read their own email
}
}
}
Write Rules
Control who can create, update, or delete data:
// Basic write rule
"posts": {
"$postId": {
"write": "admin.uid == data.authorId", // Only author can edit
"createdAt": {
"write": "!data" // Can only set createdAt when creating (no previous data)
}
}
}
// Demonstrating rule override hierarchy
"store": {
"write": "false", // No one can write to store (overrides all child rules)
"products": {
"write": "admin.role == 'manager'", // This is ignored due to parent rule
"$productId": {
"write": "admin.uid == data.ownerId" // This is also ignored
}
}
}
Validate Rules
Ensure data integrity and format requirements:
// Simple field validation
"users": {
"$userId": {
"age": {
"validate": "newData >= 13 && newData <= 120"
},
"email": {
"validate": "newData.includes('@') && newData.includes('.')"
}
}
}
// IMPORTANT: Array validation has two parts
// 1. Use path[] to validate EACH element in the array
// 2. Use path to validate the ENTIRE array
// Validating array elements (path[])
"users": {
"$userId": {
"favColors[]": {
// This runs for EACH element being added/modified
// newData here refers to the individual element
"validate": "typeof newData === 'string'"
}
}
}
// Validating the entire array (path)
"users": {
"$userId": {
"favColors": {
// This validates the ENTIRE array as a whole
// newData here refers to the complete array
"validate": "newData.length <= 10"
}
}
}
// COMBINING both: validate elements AND array size
"users": {
"$userId": {
"favColors[]": {
// Validate each element is a string AND check total array size
// Use newRoot to access the complete array.length before/after changes
"validate": "typeof newData === 'string' && newRoot.users[$userId].favColors.length <= 10"
}
}
}
// Ensuring required fields
"posts": {
"$postId": {
"validate": "newData.title && newData.content && newData.title.length <= 200"
}
}
Query Rules
Control which items can be returned in query results:
// Filter products by price for free users
"products": {
"query": "child.price <= 100 || admin.role == 'premium'" // Free users only see cheap products
}
// Private messaging system
"messages": {
"query": "child.to == admin.uid || child.from == admin.uid" // Only see your messages
}
// Show only published posts or user's own drafts
"posts": {
"query": "child.published == true || child.authorId == admin.uid"
}
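To make the filtering behavior concrete, here is an illustrative sketch (not NukeBase internals) of what a query rule does: each child item is kept in the result only if the rule expression evaluates to true for that item. The `applyQueryRule` helper and the rule-as-function form are assumptions for demonstration only.

```javascript
// Illustrative sketch of query-rule filtering: keep each child only if
// the rule passes for that child. Not the actual NukeBase implementation.
function applyQueryRule(items, admin, rulePasses) {
  return Object.fromEntries(
    Object.entries(items).filter(([, child]) => rulePasses(child, admin))
  );
}

const messages = {
  m1: { from: "alice", to: "bob", text: "hi" },
  m2: { from: "carol", to: "dave", text: "yo" },
};

// Equivalent of the rule "child.to == admin.uid || child.from == admin.uid"
const visible = applyQueryRule(
  messages,
  { uid: "bob" },
  (child, admin) => child.to === admin.uid || child.from === admin.uid
);
console.log(Object.keys(visible)); // → [ 'm1' ]
```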
File Security Rules
Control file upload and download permissions using specialized file rules:
// File paths are converted to dot notation in rules
"files": {
"users": {
"profile-pictures": {
// Anyone can read profile pictures
"readFile": "true",
// Only authenticated users can upload
"writeFile": "admin.uid != null"
},
"$userId": {
"private": {
// Only the user can access their private files
"readFile": "admin.uid == $userId",
"writeFile": "admin.uid == $userId"
}
}
},
"public": {
// Public files accessible to everyone
"readFile": "true",
"writeFile": "admin.role == 'admin'"
}
}
File Rules Notes:
- File paths like users/profile-pictures/image.jpg become users.profile-pictures in security rules
- Use readFile and writeFile rules (not read/write) for file operations
- The system automatically stores file references in your database at the specified path
- Files are automatically checked for upload completion, and database references are updated
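The path mapping above can be sketched as a small helper: a slash-separated storage path maps to dot notation with the filename dropped. This `filePathToRulePath` function is hypothetical — the real conversion happens inside NukeBase — but it shows which rule path a given upload is checked against.

```javascript
// Hypothetical helper (not part of NukeBase) illustrating how a storage
// path maps to the dot-notation path that file rules are matched against.
function filePathToRulePath(filePath) {
  const segments = filePath.split('/');
  segments.pop(); // drop the filename (e.g. image.jpg)
  return segments.join('.');
}

console.log(filePathToRulePath("users/profile-pictures/image.jpg"));
// → "users.profile-pictures"
```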
Available Variables
Rules have access to several context variables:
Variable | Description | Available In
---|---|---
data | Current value at the path (before changes) | All rule types
newData | Value after the write operation | write, validate
root | Current database root | All rule types
newRoot | Database root after the write | write, validate
admin | Authentication object with user info | All rule types
$variables | Values from wildcard path segments | All rule types
child | Individual item in a query | query only
Best Practices
- Start with restrictive rules, then add exceptions as needed
- Use validate rules to ensure data integrity
- Test rules thoroughly before deploying to production
- Keep rules simple and readable
- Critical for arrays: use path[] to validate EACH array element, path to validate the ENTIRE array
- Only one validate rule per path - combine conditions with && or ||
- Remember that multiple read/write rules can match at the same depth, but higher rules override deeper ones
- Validate rules only match the most specific rule at a given path
Common Mistakes to Avoid
Mistake 1: Confusing array and element validation
// WRONG - Tries to check array length on each string element
"tags[]": {
"validate": "newData.length <= 5" // newData is a string, not the array!
}
// CORRECT - Check element type AND array length
"tags[]": {
"validate": "typeof newData === 'string' && newRoot.users[$userId].tags.length <= 5"
}
Mistake 2: Multiple validate rules on same path
// WRONG - Only the last validate rule will be used!
"email": {
"validate": "newData.includes('@')",
"validate": "newData.includes('.')" // This overwrites the first rule!
}
// CORRECT - Combine with &&
"email": {
"validate": "newData.includes('@') && newData.includes('.')"
}
Domain-Database Architecture
NukeBase's architecture is built on WebSocket connections, which creates a unique relationship between domains and databases. Understanding this architecture is essential for designing your multi-domain applications.
Key Concept: How you configure startDB()
determines whether multiple domains
share a single database or each domain has its own isolated database.
root/allDB/
├── ecosystem.config.js # PM2 configuration for managing all apps
├── setup.js # One command for setting up NukeBase
├── db1/
│ ├── app1/ # Static files for app1
│ ├── server/
│ │ ├── app.js # Domain configuration for all apps in db1
│ │ ├── database.exe
│ │ └── data.json # Database for all apps in db1
│ └── ...
│
├── db2/
│ ├── app2/ # Static files for app2
│ ├── app3/ # Static files for app3
│ ├── server/
│ │ ├── app.js # Domain configuration for all apps in db2
│ │ ├── database.exe
│ │ └── data.json # Separate database for all apps in db2
│ └── ...
Understanding the File Structure:
- root/allDB/ - The parent directory containing all your databases
- ecosystem.config.js - PM2 configuration file for managing multiple databases
- setup.js - Auto-configures SSH keys and package.json, and installs Node.js and Nginx on the VPS
- db1/, db2/ - Individual database folders, each with its own code
- app1/, app2/, app3/ - Static files like HTML, CSS, and client-side JavaScript
- server/ - Server-side code and database files
- app.js - Main configuration file for domain settings and server initialization
- data.json - The actual database file that stores your application data
Database Configuration Files
Each application has its own app.js
file that configures domain settings and initializes the
server.
For local development, each application uses a different port.
// db1/server/app.js
const woodworker = addDomain({
domain: "woodworker.com",
port: 3001, // First app on port 3001
authPath: "users" // Path where user authentication data is stored
});
// Start the server - only one startDB() call per app.js file
startDB({ deploy: "http", http: "127.0.0.1" });
// db2/server/app.js
const burgerCA = addDomain({
domain: "burgerCA.com",
port: 3002, // Second app on port 3002
authPath: "users" // Path where user authentication data is stored
});
const burgerAZ = addDomain({
domain: "burgerAZ.com",
port: 3003, // Third app on port 3003
authPath: "users" // Path where user authentication data is stored
});
// Start the server - in its own process
startDB({ deploy: "http", http: "127.0.0.1" });
Important: Call startDB() only once per app.js file, though a single app.js can register multiple domains with addDomain(). For multiple databases, you need separate folders, each with its own app.js running as an independent process.
Process Management with PM2
To manage multiple databases efficiently, NukeBase works well with process managers like PM2. The ecosystem.config.js file helps you start, stop, and monitor all your databases with simple commands.
// ecosystem.config.js
module.exports = {
apps: [
{
name: 'db1',
script: 'db1/server/database.js',
autorestart: true,
watch: ['db1/server/app.js', "db1/public"],
ignore_watch: ['db1/server/data.json'],
},
{
name: 'db2',
script: 'db2/server/database.js',
autorestart: true,
watch: ['db2/server/app.js', "db2/public"],
ignore_watch: ['db2/server/data.json'],
}
],
};
Process Management: Start all your databases with a single command: pm2 start ecosystem.config.js
PM2 will automatically monitor your applications, restart them if they crash, and can even reload them when code changes.
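Beyond starting, a few stock PM2 commands cover day-to-day operations (these are standard PM2 commands, not NukeBase-specific):

```shell
pm2 start ecosystem.config.js   # start all databases
pm2 status                      # list processes and their state
pm2 logs db1                    # stream logs for a single database
pm2 restart db2                 # restart one database
pm2 stop all                    # stop everything
pm2 save && pm2 startup         # persist the process list across reboots
```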
Local Development Architecture
In development mode, each application connects to its own database through different ports on localhost:
┌────────────────┐ ┌────────────────┐
│ 127.0.0.1:3001 │ │ 127.0.0.1:3002 │
└───────┬────────┘ │ 127.0.0.1:3003 │
│ └────────┬───────┘
│ WebSocket │
│ Connections │
▼ ▼
┌────────────────┐ ┌────────────────┐
│ Database #1 │ │ Database #2 │
│ (db1 folder) │ │ (db2 folder) │
└────────────────┘ └────────────────┘
Development Tip: Access each application by its port number in your browser: http://127.0.0.1:3001 for db1, http://127.0.0.1:3002 for db2, etc. You can run multiple applications simultaneously to test multi-tenant scenarios.
Single Database Instance (Node.js HTTPS)
For simpler production scenarios or when all domains need to share data, use the deploy: "https" option. This creates a single database instance that serves multiple domains through Node.js's built-in HTTPS module.
// Multiple domains with a single shared database
startDB({ deploy: "https", http: "127.0.0.1" });
Key Features of HTTPS Mode:
- Uses Node.js built-in HTTPS/TLS modules to handle secure connections
- Automatically routes traffic based on domain names
- Multiple domains connect to a single database instance
- All domains share the same data
- Changes made through one domain are immediately visible on all domains
- Single CPU Core: This mode is limited to using only one CPU core
- Single Port: All domains must share port 443 (HTTPS standard port)
┌──────────────┐ ┌──────────────┐
│ example.com │ │ example2.com │
└──────┬───────┘ └───────┬──────┘
│ │
│ HTTPS Connections │
│ (Port 443) │
▼ ▼
┌─────────────────────────────────┐
│ │
│ Single Database (Single Core) │
│ │
└─────────────────────────────────┘
Performance Limitation: The deploy: "https"
option uses Node.js's built-in
HTTPS module, which runs on a single CPU core. This means that regardless of how many CPU cores your server
has, NukeBase can only utilize one core for processing all requests. This can become a bottleneck for
high-traffic applications.
Best use cases for deploy: "https":
- Single applications that need multiple domain access to the same data
- Small to medium traffic applications
- When simplicity of setup is more important than maximum performance
- Development or staging environments
Multiple Database Instances (Nginx)
For high-performance, multi-tenant production applications, the deploy: "nginx"
option provides
the best scalability. This approach uses Nginx as a reverse proxy to route traffic to multiple independent
database instances, each potentially running on its own CPU core.
// Multiple domains with separate isolated databases
startDB({ deploy: "nginx", http: "127.0.0.1" });
Key Features of Nginx Mode:
- Uses Nginx as a reverse proxy to route traffic based on domain names
- Each domain connects to its own dedicated database instance running on a unique port
- Data is completely isolated between domains
- Changes on one domain do not affect other domains
- Multi-Core Support: Each database instance can potentially run on different CPU cores
- Port Isolation: Nginx handles the routing, so port conflicts are avoided
- Automatically generates Nginx configuration files for each domain
┌─────────┐
│ Nginx │
└────┬────┘
│ (Reverse Proxy)
┌─────────────┴─────────────┐
│ │
┌────────▼────────┐ ┌─────────▼───────┐
│ example.com │ │ example2.com │
│→ 127.0.0.1:3001 │ │→ 127.0.0.1:3002 │
└────────┬────────┘ └─────────┬───────┘
│ │
┌────────▼────────┐ ┌─────────▼───────┐
│ Database #1 │ │ Database #2 │
│ (Separate Core) │ │ (Separate Core) │
└─────────────────┘ └─────────────────┘
Performance Advantage: The deploy: "nginx"
option allows you to utilize
multiple CPU cores by running different NukeBase instances for each domain. Nginx efficiently routes traffic
to the correct instance, which is ideal for high-traffic or resource-intensive applications.
Best use cases for deploy: "nginx":
- Multi-tenant databases requiring data isolation
- High-traffic applications that need to utilize multiple CPU cores
- When you need to host multiple separate databases on a single VPS
- Production environments where performance scalability is important
Consistency Between Development and Production: You can use the same folder structure for both local development (deploy: "http") and production (deploy: "nginx"), making it easier to maintain consistency across environments. Each folder represents an independent application with its own database.
Recommendation: For most production applications on a single VPS, use deploy: "nginx". It provides better performance, scalability, and flexibility, even if you're only hosting a single application currently. For local development, use deploy: "http" with the same folder structure to mirror your production environment.
Database Triggers
Create event-driven functions that respond to database changes:
// Create a trigger for when a request is updated
addFunction("onUpdate", "requests.$requestId", async function(context) {
// The context object contains all relevant information about the change
const afterNotes = context.dataAfter?.notes;
// Nothing to do if the update didn't touch notes
if (typeof afterNotes !== "string") return;
// Replace "pizza" with the pizza emoji
const newNotes = afterNotes.replaceAll("pizza", "🍕");
// Avoid an infinite loop: skip the write if nothing changed
if (newNotes === afterNotes) {
return;
}
// Update the data with our modified version
update(context.path, { notes: newNotes });
});
Key components of database triggers:
addFunction(eventType, pathPattern, callbackFunction)
Event Types
"onSet" - Triggered when data is created or completely replaced
"onUpdate" - Triggered when data is partially updated
"onRemove" - Triggered when data is deleted
"onValue" - Triggered for all changes (set, update, remove)
Path Patterns
Use a path string with wildcards to match specific data paths:
"users.$userId" - Matches any user path like "users.john" or "users.alice"
"posts.$postId.comments.$commentId" - Matches any comment on any post
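To make the wildcard semantics concrete, here is a hypothetical matcher (not the NukeBase implementation) that extracts the $variable values from a concrete path, the way a trigger's pattern is matched against the path that changed:

```javascript
// Hypothetical sketch of wildcard matching: returns the captured
// $variables on a match, or null if the path doesn't fit the pattern.
function matchWildcards(pattern, path) {
  const patParts = pattern.split('.');
  const pathParts = path.split('.');
  if (patParts.length !== pathParts.length) return null;
  const vars = {};
  for (let i = 0; i < patParts.length; i++) {
    if (patParts[i].startsWith('$')) {
      vars[patParts[i].slice(1)] = pathParts[i]; // capture wildcard value
    } else if (patParts[i] !== pathParts[i]) {
      return null; // literal segment mismatch
    }
  }
  return vars;
}

console.log(matchWildcards("posts.$postId.comments.$commentId", "posts.p1.comments.c9"));
// → { postId: 'p1', commentId: 'c9' }
```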
Context Object
Your callback function receives a context object containing:
context.path
- The complete path that was changedcontext.dataAfter
- The data after the change (null for remove operations)context.dataBefore
- The data before the change (null for new data)
Important: When modifying data within a trigger that affects the same path you're watching, always implement safeguards to prevent infinite loops, as shown in the example.
Complete Example: Order Processing
// React to new orders being created
addFunction("onSet", "orders.$orderId", async function(context) {
// Only run if this is a new order (no previous data)
if (!context.dataBefore && context.dataAfter) {
// Extract orderId from the path
const orderId = context.path.split('.')[1];
// Update the order status
await update(context.path, {
status: "processing",
processingStart: Date.now()
});
}
});
WebSocket Functions
Create custom server functions that clients can call through wsFunction:
addWsFunction("getUsersCount", async function (data, admin, sessionId) {
// Fetch all users
const res = await get("users");
// Count how many users exist
const count = Object.keys(res.data).length;
// Return the number to the calling client
return count;
});
WebSocket functions receive:
- data - the payload sent by the client
- admin - the authentication object (for protected operations)
- sessionId - the caller's session ID
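Because a handler's body is plain async JavaScript, its logic can be exercised off the server by stubbing get(). The stub below is purely illustrative; the `{ data }` envelope mirrors the response shape shown earlier for get():

```javascript
// The counting logic from getUsersCount, extracted so it can run against
// a stubbed get(). In production, get would be NukeBase's own function.
async function countUsers(get) {
  const res = await get("users");
  return Object.keys(res.data).length;
}

// Stub returning the { data } envelope that get() responses use
const fakeGet = async () => ({ data: { john: {}, alice: {}, bob: {} } });

countUsers(fakeGet).then(count => console.log(count)); // → 3
```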
Connection Events
Track client connections and disconnections:
// When a client connects
onConnection(function (admin, sessionId, req) {
// Record session start time
update(`sessions.${admin.uid}.${sessionId}`, {
start: Date.now()
});
});
// When a client disconnects
onClose(function (admin, sessionId, req) {
// Record session end time
update(`sessions.${admin.uid}.${sessionId}`, {
end: Date.now()
});
});
Starting the Database
Start the NukeBase server with configuration options by calling startDB() once at the end of your configuration:
// Local IP Address mode (no SSL)
startDB({ deploy: "http", http: "127.0.0.1"});
// Server IP Address mode (no SSL)
startDB({ deploy: "http", http: "126.23.45.1"});
// Multiple https domains with a single database instance
startDB({ deploy: "https", http: "0.0.0.0"});
// Multiple https domains with multiple database instances if you have multiple app folders
startDB({ deploy: "nginx", http: "127.0.0.1"});
Configuration options:
- deploy: String - selects the deployment mode
- When "http": Use HTTP without SSL, binding to the specified IP address
- When "https": Use HTTPS with the configured SSL certificates
- When "nginx": Use Nginx as a reverse proxy, routing each domain's HTTPS traffic to its app server via 127.0.0.1. This setting also automatically generates the /etc/nginx/sites-enabled/example.com.config file for each domain, so you can create multiple NukeBase folders and each app folder will have its own database.
- http: String - the IP address to bind to
- Use "127.0.0.1" to accept connections only from the local machine
- Use a specific IP address like "126.23.45.1" to bind to that server address
- Use "0.0.0.0" to accept connections from any IP
Complete Server Example
Here's a minimal but complete server setup:
module.exports = (fs, express, addFunction, addWsFunction, get, set, update, remove, query, generateRequestId, data, addDomain, startDB, onConnection, onClose, basePath) => {
// Set up a domain
const myApp = addDomain({
domain: "myapp.com",
https: {
key: '/etc/letsencrypt/live/myapp.com/privkey.pem',
cert: '/etc/letsencrypt/live/myapp.com/fullchain.pem'
},
port: 3000,
authPath: "users" // Path where user authentication data is stored
});
// Configure middleware for serving static files
const path = require('path');
myApp.app.use(express.static(path.join(basePath, 'public')));
// Add a database trigger for important changes
addFunction("onValue", "orders.$orderId", async function(context) {
// Only trigger if data has actually changed
if (JSON.stringify(context.dataAfter) !== JSON.stringify(context.dataBefore)) {
await set(`logs.${generateRequestId()}`, {
path: context.path,
timestamp: Date.now(),
oldValue: context.dataBefore,
newValue: context.dataAfter,
change: "Important data changed"
});
}
});
// Add a WebSocket function for client calculations
addWsFunction("addNumbers", function(data, admin, sessionId) {
// Extract numbers from the request
const { num1, num2 } = data;
// Perform the calculation on the server
const sum = num1 + num2;
// Return the result to the client
return sum;
});
// Track user connections
onConnection(function(admin, sessionId, req) {
// Record when user connects
update(`sessions.${admin.uid}.${sessionId}`, {
start: Date.now(),
userAgent: req.headers?.["user-agent"] || "Unknown"
});
// Update user status
update(`users.${admin.uid}`, {
online: true,
lastSeen: Date.now()
});
});
// Handle user disconnections
onClose(function(admin, sessionId, req) {
// Record when user disconnects
update(`sessions.${admin.uid}.${sessionId}`, {
end: Date.now(),
duration: function(current) {
return current.end - current.start;
}
});
// Update user status
update(`users.${admin.uid}`, {
online: false,
lastSeen: Date.now()
});
});
startDB({ deploy: "http", http: "127.0.0.1"});
console.log("🚀 NukeBase server running on Node.js at http://127.0.0.1:3000");
};
Note: This example demonstrates best practices including:
- Domain setup with SSL configuration
- Static file serving
- Real-time database triggers
- Custom WebSocket functions
- Connection tracking
- Server initialization with proper port configuration
Complete Client NukeBase SDK/connectWebSocket()
Here's a minimal but complete client setup:
var pendingRequests = {};
var socket;
const urlParams = new URLSearchParams(window.location.search);
const admin = urlParams.get('admin');
function generateRequestId() {
const chars = '0123456789ABCDEFGHJKLMNPQRSTUVWXYZ';
let result = '';
for (let i = 0; i < 8; i++) {
result += chars.charAt(Math.floor(Math.random() * chars.length));
}
return result;
}
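The generator above draws 8 characters from a 34-symbol alphabet (digits plus uppercase letters, with I and O excluded), giving an ID space of 34^8 — roughly 1.8 trillion values. Collisions are therefore unlikely for short-lived request tracking, though note the IDs are not cryptographically random, since Math.random() is not a CSPRNG.

```javascript
// The ID space: 34 symbols, 8 positions → 34^8 possible IDs.
const ALPHABET = '0123456789ABCDEFGHJKLMNPQRSTUVWXYZ'; // same 34 chars as above
let idSpace = 1;
for (let i = 0; i < 8; i++) idSpace *= ALPHABET.length;
console.log(idSpace); // → 1785793904896

// A sample ID in the expected shape: 8 characters from the alphabet
const sample = 'A1B2C3D4';
console.log(sample.length === 8 && [...sample].every(ch => ALPHABET.includes(ch))); // → true
```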
function sendRequest(action, path, socket, data) {
return new Promise((resolve, reject) => {
const requestId = generateRequestId();
pendingRequests[requestId] = { resolve, reject };
socket.send(JSON.stringify({ action, path, requestId, data, admin }));
});
}
function sendSubscribe(action, path, socket, data) {
socket.send(JSON.stringify({ action, path, data, admin}));
}
const sub = {
events: {},
on(action, path, callback, data) {
this.events[action + path] = callback;
sendSubscribe(action, path, socket, data);
},
emit(action, path, data) {
if (this.events[action + path]) {
this.events[action + path](data);
}
},
off(action, path, callback, data) {
sendSubscribe(action + 'Stop', path, socket, data)
delete this.events[action + path];
}
};
function connectWebSocket() {
return new Promise((resolve, reject) => {
const wsProtocol = window.location.protocol === "https:" ? "wss://" : "ws://";
const url = window.location;
socket = new WebSocket(`${wsProtocol}${url.host}${url.pathname}${url.search}`);
//socket = new WebSocket(`ws://127.0.0.1:3000`);
socket.addEventListener('message', function (event) {
try {
const response = JSON.parse(event.data);
if (response.requestId) {
// Guard: the request may already have been settled or is unknown
pendingRequests[response.requestId]?.resolve(response);
delete pendingRequests[response.requestId];
} else {
sub.emit(response.action, response.path, response);
}
} catch (error) {
console.error('Error parsing message:', error);
}
});
socket.addEventListener('open', async () => {
console.log('WebSocket connection opened');
resolve();
});
socket.addEventListener('close', () => {
console.log('WebSocket connection closed');
});
socket.addEventListener('error', (error) => {
console.error('WebSocket error:', error);
});
});
}
//Reconnect WebSocket when the browser window gains focus
document.addEventListener('visibilitychange', () => {
if (document.visibilityState === 'visible') {
setTimeout(function () {
if (socket && socket.readyState === WebSocket.CLOSED) {
connectWebSocket();
}
}, 500);
}
});
function set(path, data) {
return sendRequest("set", path, socket, data).then(data => {
return data;
})
}
function get(path) {
return sendRequest('get', path, socket).then(data => {
return data;
})
}
function update(path, data) {
return sendRequest("update", path, socket, data).then(data => {
return data;
})
}
function remove(path) {
return sendRequest("remove", path, socket).then(data => {
return data;
})
}
function query(path, query) {
return sendRequest('query', path, socket, query).then(data => {
return data;
})
}
function wsFunction(path, data) {
return sendRequest('wsFunction', path, socket, data).then(data => {
return data;
})
}
function getSub(path, handler) {
sub.on("getSub", path, handler);
return function () {
sub.off('getSub', path);
};
}
function querySub(path, query, handler) {
sub.on("querySub", path, handler, query);
return function () {
sub.off("querySub", path, handler, query);
}
}
function getSubChanged(path, handler) {
sub.on("getSubChanged", path, handler);
return function () {
sub.off('getSubChanged', path);
};
}
function querySubChanged(path, query, handler) {
sub.on("querySubChanged", path, handler, query);
return function () {
sub.off("querySubChanged", path, handler, query);
}
}
function setFile(path, file) {
return new Promise((resolve, reject) => {
const requestId = generateRequestId();
pendingRequests[requestId] = { resolve, reject };
const separator = "--myUniqueSeparator--";
const reader = new FileReader();
reader.onload = function (e) {
const arrayBuffer = e.target.result;
const textEncoder = new TextEncoder();
const encodedPath = textEncoder.encode(path + separator);
const encodedFileName = textEncoder.encode(file.name + separator);
const encodedRequestId = textEncoder.encode(requestId + separator);
const combinedArrayBuffer = new Uint8Array(
encodedPath.length + encodedFileName.length + encodedRequestId.length + arrayBuffer.byteLength
);
combinedArrayBuffer.set(encodedPath, 0);
combinedArrayBuffer.set(encodedFileName, encodedPath.length);
combinedArrayBuffer.set(encodedRequestId, encodedPath.length + encodedFileName.length);
combinedArrayBuffer.set(
new Uint8Array(arrayBuffer), encodedPath.length + encodedFileName.length + encodedRequestId.length
);
socket.send(combinedArrayBuffer.buffer);
};
reader.readAsArrayBuffer(file);
});
}
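For reference, here is a hedged sketch of how a receiver could parse the frame setFile() builds: three UTF-8 header fields (path, filename, requestId), each terminated by the separator, followed by the raw file bytes. NukeBase's actual server-side parsing may differ, and this approach assumes the file content never contains the separator string.

```javascript
// Illustrative decoder for the setFile() frame (assumption: NukeBase's
// real parser may differ). The header fields are valid UTF-8, so decoding
// the whole buffer as text is enough to locate them; file bytes are then
// sliced off by byte offset.
function decodeFileFrame(bytes) {
  const separator = "--myUniqueSeparator--";
  const text = new TextDecoder().decode(bytes);
  const [path, fileName, requestId] = text.split(separator, 3);
  // Byte length of the three header fields plus their separators
  const headerLen = new TextEncoder()
    .encode(path + separator + fileName + separator + requestId + separator)
    .length;
  return { path, fileName, requestId, fileBytes: bytes.slice(headerLen) };
}

// Round-trip with a small text "file"
const enc = new TextEncoder();
const sep = "--myUniqueSeparator--";
const frame = enc.encode("files.docs" + sep + "note.txt" + sep + "AB12CD34" + sep + "hello");
const parsed = decodeFileFrame(frame);
console.log(parsed.path, parsed.fileName, new TextDecoder().decode(parsed.fileBytes));
// → files.docs note.txt hello
```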
connectWebSocket().then(() => {
// Example usage
set("users.matt.color", "red").then(data => {
console.log(data);
})
get("sessions").then(data => {
console.log(data);
})
update("users.matt", { leadsSent: "Pending" }).then(data => {
console.log(data);
})
update("users.matt.count", 5).then(data => {
console.log(data);
})
remove("users.matt").then(data => {
console.log(data);
})
query("sessions", `child.count > 0`).then(data => {
console.log(data);
})
wsFunction("custom1", 23).then(data => {
console.log(data);
})
// Subscriptions prefix the path with the action to watch (value, get, update, or remove), e.g. "value@sessions"
getSub("value@sessions", data => {
console.log(data);
});
querySub("value@sessions", "child.count == 4", data => {
console.log(data);
});
getSubChanged("value@sessions", data => {
console.log(data);
});
querySubChanged("value@sessions", "child.count != 4", data => {
console.log(data);
});
// 'blob' must be a File or Blob (e.g. from an <input type="file">)
setFile(undefined, blob).then(data => {
console.log(data);
})
});