
Conversation

@mtrezza (Member) commented Nov 16, 2025

Pull Request

Issue

Various optimizations.

Summary by CodeRabbit

  • New Features

    • Added database latency simulation for performance benchmarks to enable realistic testing scenarios
    • Introduced new benchmark for query operations with multi-level includes
  • Chores

    • Enhanced benchmark measurement infrastructure with improved logging and structured progress reporting
    • Refactored performance measurement configuration with centralized parameter handling

@parse-github-assistant bot commented Nov 16, 2025

🚀 Thanks for opening this pull request!

@coderabbitai bot commented Nov 16, 2025

Warning

Rate limit exceeded

@mtrezza has exceeded the limit for the number of commits or files that can be reviewed per hour. Please wait 13 minutes and 41 seconds before requesting another review.

⌛ How to resolve this issue?

After the wait time has elapsed, a review can be triggered using the @coderabbitai review command as a PR comment. Alternatively, push new commits to this PR.

We recommend that you space out your commits to avoid hitting the rate limit.

🚦 How do rate limits work?

CodeRabbit enforces hourly rate limits for each developer per organization.

Our paid plans have higher rate limits than the trial, open-source and free plans. In all cases, we re-allow further reviews after a brief timeout.

Please see our FAQ for further information.

📥 Commits

Reviewing files that changed from the base of the PR and between 3ac7987 and 10fcd9f.

📒 Files selected for processing (1)
  • benchmark/performance.js (11 hunks)
📝 Walkthrough

Walkthrough

Introduces MongoLatencyWrapper.js to inject artificial latency into MongoDB operations by wrapping Collection methods. Refactors benchmark/performance.js to change measureOperation to accept configuration objects, adds dbLatency support with latency wrapper integration, improves logging infrastructure, restructures benchmark execution flow, and introduces a new benchmark for Query with Include.

Changes

  • Latency Injection Utility (benchmark/MongoLatencyWrapper.js): New file providing a wrapMongoDBWithLatency(latencyMs) function that wraps MongoDB Collection methods (find, aggregate, insertOne, updateOne, deleteOne, etc.) to inject artificial delays into operations. Handles cursor-returning, promise-returning, and synchronous methods. Returns an unwrap function to restore the originals. (A minimal sketch follows this list.)
  • Benchmark Infrastructure (benchmark/performance.js): Refactored the measureOperation signature from positional parameters to an options object ({ name, operation, iterations, skipWarmup, dbLatency }). Added dbLatency support with automatic latency-wrapper injection and cleanup. Introduced logging helpers (logInfo, logError) that replace console logs. Restructured benchmark execution to iterate over a benchmarks array with Parse server reset and database cleanup. Enhanced progress reporting with percentages and optional per-iteration logging. Added a new "Query with Include (2 levels)" benchmark. Standardized iteration counts to explicit numeric literals (e.g., 1_000).
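
As a rough illustration of the wrapper idea, here is a minimal sketch. It is not the actual contents of benchmark/MongoLatencyWrapper.js: the method list, the sleep helper, and the import of Collection from the mongodb driver are assumptions for illustration, and the real file also covers cursor-returning and synchronous methods.

const { Collection } = require('mongodb');

const sleep = ms => new Promise(resolve => setTimeout(resolve, ms));

// Sketch only: wrap a few promise-returning Collection methods with an
// artificial delay and return a function that restores the originals.
function wrapMongoDBWithLatency(latencyMs) {
  const originals = new Map();
  for (const name of ['insertOne', 'updateOne', 'deleteOne', 'findOne']) {
    originals.set(name, Collection.prototype[name]);
    Collection.prototype[name] = async function (...args) {
      const result = await originals.get(name).apply(this, args);
      await sleep(latencyMs); // simulate a network round-trip
      return result;
    };
  }
  return function unwrap() {
    originals.forEach((fn, name) => {
      Collection.prototype[name] = fn;
    });
  };
}

A benchmark would call const unwrap = wrapMongoDBWithLatency(50) before its measured loop and unwrap() afterwards so later benchmarks see the unmodified driver.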

Sequence Diagram(s)

sequenceDiagram
    autonumber
    participant Benchmark as Benchmark Code
    participant Wrapper as Latency Wrapper
    participant MongoDB as MongoDB Collection
    participant Delay as Delay Timer
    
    Benchmark->>Wrapper: call collection.find()
    Wrapper->>MongoDB: forward find()
    MongoDB-->>Wrapper: return cursor
    Wrapper->>Wrapper: wrap cursor.toArray()
    Wrapper-->>Benchmark: return wrapped cursor
    
    Benchmark->>Wrapper: call cursor.toArray()
    Wrapper->>MongoDB: toArray()
    MongoDB-->>Wrapper: return results
    Wrapper->>Delay: sleep(latencyMs)
    Delay-->>Wrapper: done
    Wrapper-->>Benchmark: return results

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~45 minutes

  • API change: measureOperation signature refactored to options object—requires verification of all call sites and backward compatibility implications
  • Latency wrapper logic: New wrapping mechanism in MongoLatencyWrapper.js handles multiple method patterns (cursors, promises, synchronous)—ensure correctness of delay injection and unwrap restoration
  • Benchmark flow restructuring: Significant changes to execution order, database reset/cleanup logic, and logging integration—verify correctness of reset/cleanup sequence
  • Integration points: dbLatency injection and cleanup must be verified to work correctly within measureOperation and not leak state between benchmarks

Possibly related PRs

Pre-merge checks and finishing touches

❌ Failed checks (1 warning, 1 inconclusive)
  • Description check (⚠️ Warning): The PR description only contains the template text and states 'Various optimizations' without filling in required sections such as the Issue link or a meaningful Approach description. Resolution: complete the PR description by linking to a related issue and providing a detailed explanation of the changes made (MongoLatencyWrapper addition, benchmark refactoring, and the new Query with Include benchmark).
  • Title check (❓ Inconclusive): The title 'ci: Fix performance step in CI' is vague and does not clearly convey what the actual changes are (adding MongoLatencyWrapper and refactoring benchmark performance measurement). Resolution: revise the title to be more specific about the primary change, such as 'Add MongoDB latency injection and refactor benchmark measurement'.

✅ Passed checks (1 passed)

  • Docstring Coverage (✅ Passed): Docstring coverage is 100.00%, which is sufficient. The required threshold is 80.00%.


@parseplatformorg (Contributor) commented Nov 16, 2025

Snyk checks have passed. No issues have been found so far.

  • Open Source Security: 0 critical, 0 high, 0 medium, 0 low (0 issues total)


@codecov bot commented Nov 16, 2025

Codecov Report

✅ All modified and coverable lines are covered by tests.
✅ Project coverage is 84.04%. Comparing base (133660f) to head (10fcd9f).
⚠️ Report is 3 commits behind head on alpha.

❗ There is a different number of reports uploaded between BASE (133660f) and HEAD (10fcd9f):

  • BASE (133660f): 13 uploads
  • HEAD (10fcd9f): 3 uploads (10 fewer than BASE)
Additional details and impacted files
@@            Coverage Diff             @@
##            alpha    #9931      +/-   ##
==========================================
- Coverage   93.07%   84.04%   -9.04%     
==========================================
  Files         187      187              
  Lines       15224    15224              
  Branches      177      177              
==========================================
- Hits        14170    12795    -1375     
- Misses       1042     2413    +1371     
- Partials       12       16       +4     

☔ View full report in Codecov by Sentry.

@mtrezza mtrezza merged commit 52f7c89 into parse-community:alpha Nov 17, 2025
12 of 24 checks passed
@coderabbitai bot left a comment

Actionable comments posted: 2

🧹 Nitpick comments (1)
benchmark/performance.js (1)

24-25: Consider making LOG_ITERATIONS configurable.

LOG_ITERATIONS is hardcoded to false, but it could be useful for debugging to enable per-iteration logging. Consider making it configurable via an environment variable, similar to BENCHMARK_ITERATIONS.

Apply this diff:

 const ITERATIONS = process.env.BENCHMARK_ITERATIONS ? parseInt(process.env.BENCHMARK_ITERATIONS, 10) : undefined;
-const LOG_ITERATIONS = false;
+const LOG_ITERATIONS = process.env.LOG_ITERATIONS === 'true';
📜 Review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 133660f and 3ac7987.

📒 Files selected for processing (2)
  • benchmark/MongoLatencyWrapper.js (1 hunks)
  • benchmark/performance.js (11 hunks)
🧰 Additional context used
🧠 Learnings (6)
📚 Learning: 2025-11-08T13:46:04.940Z
Learnt from: mtrezza
Repo: parse-community/parse-server PR: 0
File: :0-0
Timestamp: 2025-11-08T13:46:04.940Z
Learning: For new Parse Server options, verify that the option is documented in src/Options/index.js and that npm run definitions has been executed to reflect changes in src/Options/docs.js and src/Options/Definitions.js. README.md documentation is a bonus but not required for new options.

Applied to files:

  • benchmark/performance.js
📚 Learning: 2025-04-30T19:31:35.344Z
Learnt from: RahulLanjewar93
Repo: parse-community/parse-server PR: 9744
File: spec/ParseLiveQuery.spec.js:0-0
Timestamp: 2025-04-30T19:31:35.344Z
Learning: In the Parse Server codebase, the functions in QueryTools.js are typically tested through end-to-end behavior tests rather than direct unit tests, even though the functions are exported from the module.

Applied to files:

  • benchmark/performance.js
📚 Learning: 2025-05-09T09:59:06.289Z
Learnt from: mtrezza
Repo: parse-community/parse-server PR: 9445
File: spec/ParseLiveQuery.spec.js:1340-1375
Timestamp: 2025-05-09T09:59:06.289Z
Learning: New tests in the parse-server repository should use async/await with promise-based patterns rather than callback patterns with `done()`. The preferred pattern is to create a Promise that resolves when an expected event occurs, then await that Promise.

Applied to files:

  • benchmark/performance.js
📚 Learning: 2025-05-09T09:59:06.289Z
Learnt from: mtrezza
Repo: parse-community/parse-server PR: 9445
File: spec/ParseLiveQuery.spec.js:1340-1375
Timestamp: 2025-05-09T09:59:06.289Z
Learning: Tests in the parse-server repository should use promise-based approaches rather than callback patterns with `done()`. Use a pattern where a Promise is created that resolves when the event occurs, then await that promise.

Applied to files:

  • benchmark/performance.js
📚 Learning: 2025-05-04T20:41:05.147Z
Learnt from: mtrezza
Repo: parse-community/parse-server PR: 9445
File: spec/ParseLiveQuery.spec.js:1312-1338
Timestamp: 2025-05-04T20:41:05.147Z
Learning: New tests in the parse-server repository should use async/await with promise-based patterns rather than callback patterns with `done()`.

Applied to files:

  • benchmark/performance.js
📚 Learning: 2025-08-27T12:33:06.237Z
Learnt from: EmpiDev
Repo: parse-community/parse-server PR: 9770
File: src/triggers.js:467-477
Timestamp: 2025-08-27T12:33:06.237Z
Learning: In the Parse Server codebase, maybeRunAfterFindTrigger is called in production with Parse.Query objects constructed via withJSON(), so the plain object query handling bug only affects tests, not production code paths.

Applied to files:

  • benchmark/performance.js
🧬 Code graph analysis (2)
benchmark/MongoLatencyWrapper.js (2)
benchmark/performance.js (3)
  • require (15-15)
  • require (16-16)
  • require (17-17)
src/Adapters/Storage/Mongo/MongoCollection.js (1)
  • Collection (2-2)
benchmark/performance.js (1)
benchmark/MongoLatencyWrapper.js (2)
  • require (20-20)
  • result (40-40)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (16)
  • GitHub Check: Node 22
  • GitHub Check: PostgreSQL 15, PostGIS 3.5
  • GitHub Check: MongoDB 6, ReplicaSet
  • GitHub Check: PostgreSQL 15, PostGIS 3.4
  • GitHub Check: Node 20
  • GitHub Check: PostgreSQL 15, PostGIS 3.3
  • GitHub Check: PostgreSQL 16, PostGIS 3.5
  • GitHub Check: PostgreSQL 18, PostGIS 3.6
  • GitHub Check: PostgreSQL 17, PostGIS 3.5
  • GitHub Check: Node 18
  • GitHub Check: Redis Cache
  • GitHub Check: MongoDB 8, ReplicaSet
  • GitHub Check: MongoDB 7, ReplicaSet
  • GitHub Check: Docker Build
  • GitHub Check: Code Analysis (javascript)
  • GitHub Check: Benchmarks
🔇 Additional comments (7)
benchmark/MongoLatencyWrapper.js (2)

79-88: LGTM: Parameter validation and zero-latency handling.

The validation correctly ensures latencyMs is a non-negative number, and the special case for zero latency is a sensible optimization that avoids unnecessary wrapping overhead.
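
For reference, a guard along the lines described here could look like this (sketch only; the actual structure of the function may differ):

// Sketch of the validation and zero-latency short-circuit described above
// (illustrative; not the exact code from MongoLatencyWrapper.js).
function validateLatency(latencyMs) {
  if (typeof latencyMs !== 'number' || Number.isNaN(latencyMs) || latencyMs < 0) {
    throw new Error('latencyMs must be a non-negative number');
  }
  // Zero latency: callers can skip wrapping entirely and use a no-op unwrap
  return latencyMs > 0;
}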


94-116: Verify the wrapped method list against the installed driver version (mongodb@6.20.0).

The methods in the list are standard MongoDB operations and should be available in mongodb@6.20.0.

Based on the MongoDB Node.js driver documentation, the Collection class allows for insert/find/update/delete and other command operations, and mongodb version 6.20.0 was released on 2025-09-17. The listed methods—find, insertOne, insertMany, updateOne, updateMany, deleteOne, deleteMany, aggregate, createIndex, dropIndex, etc.—are standard CRUD and indexing operations that have been core to the MongoDB driver across multiple versions.

However, since I cannot access the complete v6.20.0-specific API documentation through available sources, I recommend verifying this list against either:

  1. The official MongoDB Node.js driver API documentation
  2. The driver's TypeScript type definitions in the node_modules folder after installation
  3. The HISTORY.md file in the MongoDB driver repository to check for any breaking changes between your current version and 6.20.0
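
A lightweight alternative to checking documentation is to assert at runtime that every method to be wrapped exists on the installed driver's Collection.prototype (sketch; the method list here is illustrative):

const { Collection } = require('mongodb');

// Fail fast if the installed driver does not expose a method we plan to wrap.
const methodsToWrap = [
  'find', 'aggregate', 'insertOne', 'insertMany', 'updateOne',
  'updateMany', 'deleteOne', 'deleteMany', 'createIndex', 'dropIndex',
];
const missing = methodsToWrap.filter(
  name => typeof Collection.prototype[name] !== 'function'
);
if (missing.length > 0) {
  throw new Error(`MongoDB driver does not expose: ${missing.join(', ')}`);
}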
benchmark/performance.js (5)

99-101: LGTM: Simple helper for resetting Parse SDK state.

The resetParseServer() helper correctly resets the Parse SDK's server URL between benchmarks.
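
A helper of that shape can be as small as the following (sketch; the URL constant is an assumption for illustration):

const Parse = require('parse/node');

const SERVER_URL = 'http://localhost:1337/parse'; // assumed benchmark server URL

// Sketch: restore the Parse SDK's server URL to the benchmark's known value.
function resetParseServer() {
  Parse.serverURL = SERVER_URL;
}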


115-198: LGTM: measureOperation refactored to config object pattern with enhanced features.

The refactor to accept a configuration object is a good improvement that centralizes parameters and makes the API more extensible. The additions of:

  • Optional warmup phase (lines 131-138)
  • Artificial DB latency support (lines 124-128, 191-196)
  • Progress logging (lines 152-159)
  • Percentile statistics with outlier filtering (lines 163-190)

are all valuable enhancements for benchmarking. The finally block ensures the latency wrapper is properly cleaned up even if the operation fails.
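
A call site under the new options-object signature reads roughly like this (the benchmark name and operation body are placeholders, not code from performance.js):

// Illustrative only: shows the { name, operation, iterations, skipWarmup, dbLatency }
// shape; the actual benchmarks in performance.js differ in detail.
async function benchmarkObjectSave() {
  const TestObject = Parse.Object.extend('TestObject');
  return measureOperation({
    name: 'Object Save',
    iterations: 1_000,
    skipWarmup: false,
    // dbLatency: 100, // optionally inject artificial DB latency
    operation: async () => {
      const obj = new TestObject();
      obj.set('value', Math.random());
      await obj.save();
    },
  });
}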


206-217: LGTM: Benchmark functions updated to use config object pattern.

All existing benchmarks have been consistently refactored to use the new measureOperation config object pattern with explicit iteration counts. The changes maintain the same benchmark logic while adopting the cleaner API.

Also applies to: 238-245, 267-276, 298-306, 315-331, 340-351, 373-381


439-498: LGTM: Benchmark orchestration refactored with cleaner structure.

The refactoring to define benchmarks in an array (lines 455-464) and iterate over them (lines 467-472) is a cleaner approach than individual function calls. The consistent cleanup between benchmarks (lines 469-470) ensures each test starts with a clean state.
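
The pattern described reads roughly like this (sketch; clearDatabase and the benchmark list are assumptions, and resetParseServer follows the helper noted above):

// Sketch of array-driven benchmark orchestration with per-benchmark cleanup.
const benchmarks = [
  benchmarkObjectSave,
  benchmarkQueryWithInclude,
  // ...remaining benchmark functions
];

async function runBenchmarks() {
  const results = [];
  for (const benchmark of benchmarks) {
    resetParseServer();     // reset Parse SDK state between benchmarks
    await clearDatabase();  // assumed helper: start from a clean database
    results.push(await benchmark());
  }
  return results;
}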


13-13: @actions/core in devDependencies is appropriate for the benchmark script.

@actions/core is declared in devDependencies, not in regular dependencies. This is the correct configuration for a development-only tool like a benchmark script. When npm installs dependencies (npm install or npm ci), devDependencies are installed by default, making @actions/core available during development and in CI environments where benchmarks typically run. The standard npm approach is to use the files property in package.json to specify what gets published, which would exclude the benchmark directory from end-user installations.

The original review comment's concern is theoretical rather than practical: devDependencies are available in all normal development scenarios, and benchmark scripts are typically excluded from published packages.


Comment on lines +30 to +72
function wrapMethod(methodName, latencyMs) {
  if (!originalMethods.has(methodName)) {
    originalMethods.set(methodName, Collection.prototype[methodName]);
  }

  const originalMethod = originalMethods.get(methodName);

  Collection.prototype[methodName] = function (...args) {
    // For methods that return cursors (like find, aggregate), we need to delay the execution
    // but still return a cursor-like object
    const result = originalMethod.apply(this, args);

    // Check if result has cursor methods (toArray, forEach, etc.)
    if (result && typeof result.toArray === 'function') {
      // Wrap cursor methods that actually execute the query
      const originalToArray = result.toArray.bind(result);
      result.toArray = function() {
        // Wait for the original promise to settle, then delay the result
        return originalToArray().then(
          value => new Promise(resolve => setTimeout(() => resolve(value), latencyMs)),
          error => new Promise((_, reject) => setTimeout(() => reject(error), latencyMs))
        );
      };
      return result;
    }

    // For promise-returning methods, wrap the promise with delay
    if (result && typeof result.then === 'function') {
      // Wait for the original promise to settle, then delay the result
      return result.then(
        value => new Promise(resolve => setTimeout(() => resolve(value), latencyMs)),
        error => new Promise((_, reject) => setTimeout(() => reject(error), latencyMs))
      );
    }

    // For synchronous methods, just add delay
    return new Promise((resolve) => {
      setTimeout(() => {
        resolve(result);
      }, latencyMs);
    });
  };
}

⚠️ Potential issue | 🔴 Critical

Critical: Multiple invocations will stack latency wrappers.

The wrapMethod function stores the original method only on the first invocation (line 31-33), but it unconditionally replaces Collection.prototype[methodName] on line 37. If wrapMongoDBWithLatency is called multiple times without calling unwrap() in between, the second invocation will wrap an already-wrapped method, causing latency to be applied multiple times (e.g., 2× latency on second call, 3× on third call).

Additionally, the cursor-wrapping logic only instruments toArray() (lines 45-52), but MongoDB cursors expose many other execution methods such as forEach, next, hasNext, map, and close. Operations using these methods will bypass the artificial latency, leading to inconsistent benchmark behavior.

Consider these fixes:

  1. Prevent multiple wrapping: Check if the method is already wrapped before applying a new wrapper, or throw an error if originalMethods already contains the method.

  2. Expand cursor coverage: Wrap all cursor execution methods, or document that only toArray() is supported and advise benchmark code to use it exclusively.

Apply this diff to prevent stacking:

 function wrapMethod(methodName, latencyMs) {
+  // Prevent wrapping an already-wrapped method
+  if (Collection.prototype[methodName].__isLatencyWrapped) {
+    throw new Error(`Method ${methodName} is already wrapped. Call unwrap() first.`);
+  }
+
   if (!originalMethods.has(methodName)) {
     originalMethods.set(methodName, Collection.prototype[methodName]);
   }

   const originalMethod = originalMethods.get(methodName);

   Collection.prototype[methodName] = function (...args) {
     // ... existing wrapper code ...
   };
+  
+  // Mark the method as wrapped
+  Collection.prototype[methodName].__isLatencyWrapped = true;
 }

And update the unwrap logic:

     originalMethods.forEach((originalMethod, methodName) => {
       Collection.prototype[methodName] = originalMethod;
+      delete Collection.prototype[methodName].__isLatencyWrapped;
     });

Committable suggestion skipped: line range outside the PR's diff.
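
For the second fix, a hedged sketch of extending the delay to the other promise-returning cursor methods could look like this (method names follow the standard driver cursor API; illustrative, not a committable patch):

// Sketch only: delay every promise-returning cursor method, not just toArray().
// map() is deliberately left alone because it returns a cursor, not a promise.
function wrapCursorWithLatency(cursor, latencyMs) {
  for (const name of ['toArray', 'forEach', 'next', 'hasNext', 'close']) {
    if (typeof cursor[name] !== 'function') { continue; }
    const original = cursor[name].bind(cursor);
    cursor[name] = (...args) =>
      original(...args).then(
        value => new Promise(resolve => setTimeout(() => resolve(value), latencyMs)),
        error => new Promise((_, reject) => setTimeout(() => reject(error), latencyMs))
      );
  }
  return cursor;
}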

🤖 Prompt for AI Agents
In benchmark/MongoLatencyWrapper.js around lines 30 to 72, the wrapper
unconditionally replaces Collection.prototype[methodName] causing stacked
latency on repeated invocations and only instruments cursor.toArray(), missing
other execution methods; fix by: 1) before replacing, check if originalMethods
already has methodName and if so skip wrapping (or throw) to prevent
double-wrapping; store the original only once and avoid reassigning a wrapper
over a wrapper; 2) when wrapping cursor results, wrap all execution methods (at
minimum toArray, forEach, next, hasNext, map, close and any method with function
type) by saving their originals (bound) and replacing them with promises that
delay resolution/rejection by latencyMs; 3) ensure unwrap() restores the exact
originals saved in originalMethods and removes entries so later re-wraps behave
correctly.

Comment on lines 387 to 434
async function benchmarkQueryWithInclude() {
  // Setup: Create nested object hierarchy
  const Level2Class = Parse.Object.extend('Level2');
  const Level1Class = Parse.Object.extend('Level1');
  const RootClass = Parse.Object.extend('Root');

  return measureOperation({
    name: 'Query with Include (2 levels)',
    skipWarmup: true,
    dbLatency: 100,
    iterations: 100,
    operation: async () => {
      // Create 10 Level2 objects
      const level2Objects = [];
      for (let i = 0; i < 10; i++) {
        const obj = new Level2Class();
        obj.set('name', `level2-${i}`);
        obj.set('value', i);
        level2Objects.push(obj);
      }
      await Parse.Object.saveAll(level2Objects);

      // Create 10 Level1 objects, each pointing to a Level2 object
      const level1Objects = [];
      for (let i = 0; i < 10; i++) {
        const obj = new Level1Class();
        obj.set('name', `level1-${i}`);
        obj.set('level2', level2Objects[i % level2Objects.length]);
        level1Objects.push(obj);
      }
      await Parse.Object.saveAll(level1Objects);

      // Create 10 Root objects, each pointing to a Level1 object
      const rootObjects = [];
      for (let i = 0; i < 10; i++) {
        const obj = new RootClass();
        obj.set('name', `root-${i}`);
        obj.set('level1', level1Objects[i % level1Objects.length]);
        rootObjects.push(obj);
      }
      await Parse.Object.saveAll(rootObjects);

      const query = new Parse.Query('Root');
      query.include('level1.level2');
      await query.find();
    },
  });
}

⚠️ Potential issue | 🔴 Critical

Critical: Benchmark measures object creation instead of query performance.

The benchmarkQueryWithInclude function creates all objects (30 per iteration: 10 Level2, 10 Level1, 10 Root) inside the operation callback (lines 398-432). This means the benchmark primarily measures object creation time rather than query-with-include performance, which defeats the purpose of this benchmark.

Additionally, dbLatency: 100 is hardcoded (line 396) without explanation. Why 100ms specifically? This should either be configurable, removed, or documented.

Apply this diff to move object creation outside the measured operation:

 async function benchmarkQueryWithInclude() {
-  // Setup: Create nested object hierarchy
   const Level2Class = Parse.Object.extend('Level2');
   const Level1Class = Parse.Object.extend('Level1');
   const RootClass = Parse.Object.extend('Root');
+
+  // Setup: Create nested object hierarchy (done once before benchmark)
+  const level2Objects = [];
+  for (let i = 0; i < 10; i++) {
+    const obj = new Level2Class();
+    obj.set('name', `level2-${i}`);
+    obj.set('value', i);
+    level2Objects.push(obj);
+  }
+  await Parse.Object.saveAll(level2Objects);
+
+  const level1Objects = [];
+  for (let i = 0; i < 10; i++) {
+    const obj = new Level1Class();
+    obj.set('name', `level1-${i}`);
+    obj.set('level2', level2Objects[i % level2Objects.length]);
+    level1Objects.push(obj);
+  }
+  await Parse.Object.saveAll(level1Objects);
+
+  const rootObjects = [];
+  for (let i = 0; i < 10; i++) {
+    const obj = new RootClass();
+    obj.set('name', `root-${i}`);
+    obj.set('level1', level1Objects[i % level1Objects.length]);
+    rootObjects.push(obj);
+  }
+  await Parse.Object.saveAll(rootObjects);

   return measureOperation({
     name: 'Query with Include (2 levels)',
     skipWarmup: true,
-    dbLatency: 100,
+    // dbLatency: 100, // Remove or document why 100ms is needed
     iterations: 100,
     operation: async () => {
-      // Create 10 Level2 objects
-      const level2Objects = [];
-      for (let i = 0; i < 10; i++) {
-        const obj = new Level2Class();
-        obj.set('name', `level2-${i}`);
-        obj.set('value', i);
-        level2Objects.push(obj);
-      }
-      await Parse.Object.saveAll(level2Objects);
-
-      // Create 10 Level1 objects, each pointing to a Level2 object
-      const level1Objects = [];
-      for (let i = 0; i < 10; i++) {
-        const obj = new Level1Class();
-        obj.set('name', `level1-${i}`);
-        obj.set('level2', level2Objects[i % level2Objects.length]);
-        level1Objects.push(obj);
-      }
-      await Parse.Object.saveAll(level1Objects);
-
-      // Create 10 Root objects, each pointing to a Level1 object
-      const rootObjects = [];
-      for (let i = 0; i < 10; i++) {
-        const obj = new RootClass();
-        obj.set('name', `root-${i}`);
-        obj.set('level1', level1Objects[i % level1Objects.length]);
-        rootObjects.push(obj);
-      }
-      await Parse.Object.saveAll(rootObjects);
-
+      // Only measure the query with include
       const query = new Parse.Query('Root');
       query.include('level1.level2');
       await query.find();
     },
   });
 }
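
If the artificial latency is kept, it could also be made configurable rather than hardcoded, for example via an environment variable (the variable name here is an assumption):

// Sketch: read the simulated DB latency from the environment, defaulting to none.
const DB_LATENCY_MS = process.env.BENCHMARK_DB_LATENCY
  ? parseInt(process.env.BENCHMARK_DB_LATENCY, 10)
  : 0;

// ...then in the benchmark definition:
// dbLatency: DB_LATENCY_MS,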
🤖 Prompt for AI Agents
In benchmark/performance.js around lines 387 to 434, the current operation
measured inside benchmarkQueryWithInclude creates 10 Level2, 10 Level1 and 10
Root objects per iteration so the benchmark measures object creation instead of
the query.include performance; move the object creation and Parse.Object.saveAll
calls out of the operation callback so they run once before measureOperation
(create and persist level2Objects, level1Objects and rootObjects once, re-use
them inside the operation), then inside operation only run the Parse.Query with
include and find(); also remove or expose the hardcoded dbLatency: 100 (either
drop it, read from config/env, or add a short comment explaining why 100ms) so
the benchmark latency is explicit and configurable.

@mtrezza mtrezza deleted the fix/schema-race-condition branch November 17, 2025 00:15
@parseplatformorg (Contributor) commented

🎉 This change has been released in version 8.5.0-alpha.8

@parseplatformorg added the state:released-alpha (Released as alpha version) label on Nov 17, 2025