ci: Fix performance step in CI #9931
Conversation
🚀 Thanks for opening this pull request!
📝 Walkthrough

Introduces MongoLatencyWrapper.js to inject artificial latency into MongoDB operations by wrapping Collection methods. Refactors benchmark/performance.js to change measureOperation to accept configuration objects, adds dbLatency support with latency wrapper integration, improves logging infrastructure, restructures benchmark execution flow, and introduces a new benchmark for Query with Include.
Sequence Diagram(s)

```mermaid
sequenceDiagram
autonumber
participant Benchmark as Benchmark Code
participant Wrapper as Latency Wrapper
participant MongoDB as MongoDB Collection
participant Delay as Delay Timer
Benchmark->>Wrapper: call collection.find()
Wrapper->>MongoDB: forward find()
MongoDB-->>Wrapper: return cursor
Wrapper->>Wrapper: wrap cursor.toArray()
Wrapper-->>Benchmark: return wrapped cursor
Benchmark->>Wrapper: call cursor.toArray()
Wrapper->>MongoDB: toArray()
MongoDB-->>Wrapper: return results
Wrapper->>Delay: sleep(latencyMs)
Delay-->>Wrapper: done
    Wrapper-->>Benchmark: return results
```
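A minimal sketch of the flow shown above, assuming the wrapper module exposes `wrapMongoDBWithLatency(latencyMs)` and `unwrap()` as suggested by the walkthrough and review below (the require path and export names are assumptions, not the verified API of MongoLatencyWrapper.js):

```js
// Sketch: apply artificial latency around a benchmarked operation, then restore
// the original MongoDB Collection methods afterwards. Module path and export
// names are assumptions based on the walkthrough above.
const { wrapMongoDBWithLatency, unwrap } = require('./MongoLatencyWrapper');

async function runWithLatency(latencyMs, operation) {
  wrapMongoDBWithLatency(latencyMs); // patches Collection.prototype methods
  try {
    return await operation(); // e.g. a Parse.Query('Root').find() that hits MongoDB
  } finally {
    unwrap(); // always restore the originals, even if the operation throws
  }
}
```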
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~45 minutes
Possibly related PRs
Pre-merge checks and finishing touches: ❌ Failed checks (1 warning, 1 inconclusive)
✅ Passed checks (1 passed)
✅ Snyk checks have passed. No issues have been found so far.
Codecov Report: ✅ All modified and coverable lines are covered by tests.
Additional details and impacted files

```
@@            Coverage Diff             @@
##            alpha    #9931      +/-   ##
==========================================
- Coverage   93.07%   84.04%    -9.04%
==========================================
  Files         187      187
  Lines       15224    15224
  Branches      177      177
==========================================
- Hits        14170    12795    -1375
- Misses       1042     2413    +1371
- Partials       12       16       +4
```

☔ View full report in Codecov by Sentry.
Actionable comments posted: 2
🧹 Nitpick comments (1)
benchmark/performance.js (1)
24-25: Consider making LOG_ITERATIONS configurable.
`LOG_ITERATIONS` is hardcoded to `false`, but it could be useful for debugging to enable per-iteration logging. Consider making it configurable via an environment variable, similar to `BENCHMARK_ITERATIONS`. Apply this diff:
```diff
 const ITERATIONS = process.env.BENCHMARK_ITERATIONS ? parseInt(process.env.BENCHMARK_ITERATIONS, 10) : undefined;
-const LOG_ITERATIONS = false;
+const LOG_ITERATIONS = process.env.LOG_ITERATIONS === 'true';
```
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (2)
- benchmark/MongoLatencyWrapper.js (1 hunks)
- benchmark/performance.js (11 hunks)
🧰 Additional context used
🧠 Learnings (6)
📚 Learning: 2025-11-08T13:46:04.940Z
Learnt from: mtrezza
Repo: parse-community/parse-server PR: 0
File: :0-0
Timestamp: 2025-11-08T13:46:04.940Z
Learning: For new Parse Server options, verify that the option is documented in src/Options/index.js and that npm run definitions has been executed to reflect changes in src/Options/docs.js and src/Options/Definitions.js. README.md documentation is a bonus but not required for new options.
Applied to files:
benchmark/performance.js
📚 Learning: 2025-04-30T19:31:35.344Z
Learnt from: RahulLanjewar93
Repo: parse-community/parse-server PR: 9744
File: spec/ParseLiveQuery.spec.js:0-0
Timestamp: 2025-04-30T19:31:35.344Z
Learning: In the Parse Server codebase, the functions in QueryTools.js are typically tested through end-to-end behavior tests rather than direct unit tests, even though the functions are exported from the module.
Applied to files:
benchmark/performance.js
📚 Learning: 2025-05-09T09:59:06.289Z
Learnt from: mtrezza
Repo: parse-community/parse-server PR: 9445
File: spec/ParseLiveQuery.spec.js:1340-1375
Timestamp: 2025-05-09T09:59:06.289Z
Learning: New tests in the parse-server repository should use async/await with promise-based patterns rather than callback patterns with `done()`. The preferred pattern is to create a Promise that resolves when an expected event occurs, then await that Promise.
Applied to files:
benchmark/performance.js
📚 Learning: 2025-05-09T09:59:06.289Z
Learnt from: mtrezza
Repo: parse-community/parse-server PR: 9445
File: spec/ParseLiveQuery.spec.js:1340-1375
Timestamp: 2025-05-09T09:59:06.289Z
Learning: Tests in the parse-server repository should use promise-based approaches rather than callback patterns with `done()`. Use a pattern where a Promise is created that resolves when the event occurs, then await that promise.
Applied to files:
benchmark/performance.js
📚 Learning: 2025-05-04T20:41:05.147Z
Learnt from: mtrezza
Repo: parse-community/parse-server PR: 9445
File: spec/ParseLiveQuery.spec.js:1312-1338
Timestamp: 2025-05-04T20:41:05.147Z
Learning: New tests in the parse-server repository should use async/await with promise-based patterns rather than callback patterns with `done()`.
Applied to files:
benchmark/performance.js
📚 Learning: 2025-08-27T12:33:06.237Z
Learnt from: EmpiDev
Repo: parse-community/parse-server PR: 9770
File: src/triggers.js:467-477
Timestamp: 2025-08-27T12:33:06.237Z
Learning: In the Parse Server codebase, maybeRunAfterFindTrigger is called in production with Parse.Query objects constructed via withJSON(), so the plain object query handling bug only affects tests, not production code paths.
Applied to files:
benchmark/performance.js
🧬 Code graph analysis (2)
benchmark/MongoLatencyWrapper.js (2)
benchmark/performance.js (3)
  require (15-15)
  require (16-16)
  require (17-17)
src/Adapters/Storage/Mongo/MongoCollection.js (1)
  Collection (2-2)
benchmark/performance.js (1)
benchmark/MongoLatencyWrapper.js (2)
  require (20-20)
  result (40-40)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (16)
- GitHub Check: Node 22
- GitHub Check: PostgreSQL 15, PostGIS 3.5
- GitHub Check: MongoDB 6, ReplicaSet
- GitHub Check: PostgreSQL 15, PostGIS 3.4
- GitHub Check: Node 20
- GitHub Check: PostgreSQL 15, PostGIS 3.3
- GitHub Check: PostgreSQL 16, PostGIS 3.5
- GitHub Check: PostgreSQL 18, PostGIS 3.6
- GitHub Check: PostgreSQL 17, PostGIS 3.5
- GitHub Check: Node 18
- GitHub Check: Redis Cache
- GitHub Check: MongoDB 8, ReplicaSet
- GitHub Check: MongoDB 7, ReplicaSet
- GitHub Check: Docker Build
- GitHub Check: Code Analysis (javascript)
- GitHub Check: Benchmarks
🔇 Additional comments (7)
benchmark/MongoLatencyWrapper.js (2)
79-88: LGTM: Parameter validation and zero-latency handling.

The validation correctly ensures `latencyMs` is a non-negative number, and the special case for zero latency is a sensible optimization that avoids unnecessary wrapping overhead.
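A minimal sketch of what such validation typically looks like (the exact error message and control flow in MongoLatencyWrapper.js may differ):

```js
// Sketch of the described checks: reject non-numeric or negative latency values,
// and treat zero latency as "no wrapping needed".
function shouldWrap(latencyMs) {
  if (typeof latencyMs !== 'number' || Number.isNaN(latencyMs) || latencyMs < 0) {
    throw new Error(`latencyMs must be a non-negative number, got: ${latencyMs}`);
  }
  return latencyMs > 0; // zero latency: skip the wrapping overhead entirely
}
```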
94-116: Methods in the list are standard MongoDB operations and should be available in mongodb@6.20.0.
Based on the MongoDB Node.js driver documentation, the Collection class allows for insert/find/update/delete and other command operations, and mongodb version 6.20.0 was released on 2025-09-17. The listed methods—find, insertOne, insertMany, updateOne, updateMany, deleteOne, deleteMany, aggregate, createIndex, dropIndex, etc.—are standard CRUD and indexing operations that have been core to the MongoDB driver across multiple versions.
However, since I cannot access the complete v6.20.0-specific API documentation through available sources, I recommend verifying this list against either:
- The official MongoDB Node.js driver API documentation
- The driver's TypeScript type definitions in the node_modules folder after installation
- The HISTORY.md file in the MongoDB driver repository to check for any breaking changes between your current version and 6.20.0
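As a complement to those options, a quick runtime check can confirm that the method names targeted by the wrapper exist on the installed driver. A sketch (the METHODS array here is an assumption mirroring the names mentioned above, not the exact list in MongoLatencyWrapper.js):

```js
// Sketch: verify that each method name slated for wrapping is a function on the
// installed driver's Collection prototype.
const { Collection } = require('mongodb');

const METHODS = [
  'find', 'insertOne', 'insertMany', 'updateOne', 'updateMany',
  'deleteOne', 'deleteMany', 'aggregate', 'createIndex', 'dropIndex',
];

const missing = METHODS.filter(name => typeof Collection.prototype[name] !== 'function');
console.log(missing.length ? `Missing: ${missing.join(', ')}` : 'All listed methods exist');
```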
benchmark/performance.js (5)
99-101: LGTM: Simple helper for resetting Parse SDK state.

The `resetParseServer()` helper correctly resets the Parse SDK's server URL between benchmarks.
115-198: LGTM: measureOperation refactored to config object pattern with enhanced features.

The refactor to accept a configuration object is a good improvement that centralizes parameters and makes the API more extensible. The additions of:
- Optional warmup phase (lines 131-138)
- Artificial DB latency support (lines 124-128, 191-196)
- Progress logging (lines 152-159)
- Percentile statistics with outlier filtering (lines 163-190)
are all valuable enhancements for benchmarking. The `finally` block ensures the latency wrapper is properly cleaned up even if the operation fails.
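For illustration, a benchmark using the new config-object style might look like this sketch (option names mirror those visible elsewhere in this PR; the benchmark name and body are made up):

```js
// Sketch: invoking measureOperation with a configuration object rather than
// positional arguments. The function and class names here are hypothetical.
async function benchmarkCreateObject() {
  return measureOperation({
    name: 'Create Object',
    iterations: 100,   // explicit iteration count
    skipWarmup: false, // run the optional warmup phase first
    dbLatency: 20,     // inject 20 ms of artificial MongoDB latency
    operation: async () => {
      const obj = new Parse.Object('BenchmarkItem');
      obj.set('value', Math.random());
      await obj.save();
    },
  });
}
```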
206-217: LGTM: Benchmark functions updated to use config object pattern.

All existing benchmarks have been consistently refactored to use the new `measureOperation` config object pattern with explicit iteration counts. The changes maintain the same benchmark logic while adopting the cleaner API.

Also applies to: 238-245, 267-276, 298-306, 315-331, 340-351, 373-381
439-498: LGTM: Benchmark orchestration refactored with cleaner structure.

The refactoring to define benchmarks in an array (lines 455-464) and iterate over them (lines 467-472) is a cleaner approach than individual function calls. The consistent cleanup between benchmarks (lines 469-470) ensures each test starts with a clean state.
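A minimal sketch of that orchestration pattern (the benchmark names and the cleanup helper are placeholders, not the exact identifiers in benchmark/performance.js):

```js
// Sketch: define benchmarks in an array and run them sequentially with cleanup
// between runs, mirroring the structure described in the review.
async function runAllBenchmarks() {
  const benchmarks = [
    benchmarkCreateObject,     // hypothetical
    benchmarkQueryWithInclude, // added in this PR
    // ...remaining benchmark functions
  ];

  for (const benchmark of benchmarks) {
    await clearDatabase();  // hypothetical cleanup between benchmarks
    resetParseServer();     // reset Parse SDK state (helper mentioned above)
    await benchmark();
  }
}
```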
13-13: Checked how the benchmark script is configured and whether it should be part of the published package (package.json scripts and publishing configuration).
Based on my investigation:
The script output confirms @actions/core is in devDependencies, not in regular dependencies. However, this is the correct configuration for a development-only tool like a benchmark script. When npm installs dependencies (npm install or npm ci), devDependencies are installed by default, making @actions/core available during development and CI environments where benchmarks typically run. The standard npm approach is to use the files property in package.json to specify what gets published, which would exclude the benchmark directory from end-user installations.
The original review comment's concern is theoretical rather than practical: devDependencies are available in all normal development scenarios, and benchmark scripts are typically excluded from published packages.
```js
function wrapMethod(methodName, latencyMs) {
  if (!originalMethods.has(methodName)) {
    originalMethods.set(methodName, Collection.prototype[methodName]);
  }

  const originalMethod = originalMethods.get(methodName);

  Collection.prototype[methodName] = function (...args) {
    // For methods that return cursors (like find, aggregate), we need to delay the execution
    // but still return a cursor-like object
    const result = originalMethod.apply(this, args);

    // Check if result has cursor methods (toArray, forEach, etc.)
    if (result && typeof result.toArray === 'function') {
      // Wrap cursor methods that actually execute the query
      const originalToArray = result.toArray.bind(result);
      result.toArray = function() {
        // Wait for the original promise to settle, then delay the result
        return originalToArray().then(
          value => new Promise(resolve => setTimeout(() => resolve(value), latencyMs)),
          error => new Promise((_, reject) => setTimeout(() => reject(error), latencyMs))
        );
      };
      return result;
    }

    // For promise-returning methods, wrap the promise with delay
    if (result && typeof result.then === 'function') {
      // Wait for the original promise to settle, then delay the result
      return result.then(
        value => new Promise(resolve => setTimeout(() => resolve(value), latencyMs)),
        error => new Promise((_, reject) => setTimeout(() => reject(error), latencyMs))
      );
    }

    // For synchronous methods, just add delay
    return new Promise((resolve) => {
      setTimeout(() => {
        resolve(result);
      }, latencyMs);
    });
  };
}
```
Critical: Multiple invocations will stack latency wrappers.
The `wrapMethod` function stores the original method only on the first invocation (lines 31-33), but it unconditionally replaces `Collection.prototype[methodName]` on line 37. If `wrapMongoDBWithLatency` is called multiple times without calling `unwrap()` in between, the second invocation will wrap an already-wrapped method, causing latency to be applied multiple times (e.g., 2× latency on the second call, 3× on the third).

Additionally, the cursor-wrapping logic only instruments `toArray()` (lines 45-52), but MongoDB cursors expose many other execution methods such as `forEach`, `next`, `hasNext`, `map`, and `close`. Operations using these methods will bypass the artificial latency, leading to inconsistent benchmark behavior.
Consider these fixes:
- Prevent multiple wrapping: Check if the method is already wrapped before applying a new wrapper, or throw an error if `originalMethods` already contains the method.
- Expand cursor coverage: Wrap all cursor execution methods, or document that only `toArray()` is supported and advise benchmark code to use it exclusively (see the sketch after the diff below).
Apply this diff to prevent stacking:
```diff
 function wrapMethod(methodName, latencyMs) {
+  // Prevent wrapping an already-wrapped method
+  if (Collection.prototype[methodName].__isLatencyWrapped) {
+    throw new Error(`Method ${methodName} is already wrapped. Call unwrap() first.`);
+  }
+
   if (!originalMethods.has(methodName)) {
     originalMethods.set(methodName, Collection.prototype[methodName]);
   }
   const originalMethod = originalMethods.get(methodName);
   Collection.prototype[methodName] = function (...args) {
     // ... existing wrapper code ...
   };
+
+  // Mark the method as wrapped
+  Collection.prototype[methodName].__isLatencyWrapped = true;
 }
```

And update the unwrap logic:

```diff
 originalMethods.forEach((originalMethod, methodName) => {
   Collection.prototype[methodName] = originalMethod;
+  delete Collection.prototype[methodName].__isLatencyWrapped;
 });
```

Committable suggestion skipped: line range outside the PR's diff.
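For the second point about cursor coverage, a sketch of how the remaining cursor execution methods could be delayed generically, assuming they return promises on the installed driver version (the helper and method list below are illustrative, not part of the PR):

```js
// Sketch: delay all promise-returning cursor execution methods, not just toArray().
// Assumes each listed method returns a Promise on the installed MongoDB driver.
const CURSOR_METHODS = ['toArray', 'forEach', 'next', 'hasNext', 'close'];

function wrapCursorWithLatency(cursor, latencyMs) {
  const delay = value =>
    new Promise(resolve => setTimeout(() => resolve(value), latencyMs));

  for (const name of CURSOR_METHODS) {
    if (typeof cursor[name] !== 'function') { continue; }
    const original = cursor[name].bind(cursor);
    cursor[name] = (...args) =>
      original(...args).then(
        value => delay(value),
        error => delay(null).then(() => Promise.reject(error))
      );
  }
  return cursor;
}
```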
🤖 Prompt for AI Agents
In benchmark/MongoLatencyWrapper.js around lines 30 to 72, the wrapper
unconditionally replaces Collection.prototype[methodName] causing stacked
latency on repeated invocations and only instruments cursor.toArray(), missing
other execution methods; fix by: 1) before replacing, check if originalMethods
already has methodName and if so skip wrapping (or throw) to prevent
double-wrapping; store the original only once and avoid reassigning a wrapper
over a wrapper; 2) when wrapping cursor results, wrap all execution methods (at
minimum toArray, forEach, next, hasNext, map, close and any method with function
type) by saving their originals (bound) and replacing them with promises that
delay resolution/rejection by latencyMs; 3) ensure unwrap() restores the exact
originals saved in originalMethods and removes entries so later re-wraps behave
correctly.
```js
async function benchmarkQueryWithInclude() {
  // Setup: Create nested object hierarchy
  const Level2Class = Parse.Object.extend('Level2');
  const Level1Class = Parse.Object.extend('Level1');
  const RootClass = Parse.Object.extend('Root');

  return measureOperation({
    name: 'Query with Include (2 levels)',
    skipWarmup: true,
    dbLatency: 100,
    iterations: 100,
    operation: async () => {
      // Create 10 Level2 objects
      const level2Objects = [];
      for (let i = 0; i < 10; i++) {
        const obj = new Level2Class();
        obj.set('name', `level2-${i}`);
        obj.set('value', i);
        level2Objects.push(obj);
      }
      await Parse.Object.saveAll(level2Objects);

      // Create 10 Level1 objects, each pointing to a Level2 object
      const level1Objects = [];
      for (let i = 0; i < 10; i++) {
        const obj = new Level1Class();
        obj.set('name', `level1-${i}`);
        obj.set('level2', level2Objects[i % level2Objects.length]);
        level1Objects.push(obj);
      }
      await Parse.Object.saveAll(level1Objects);

      // Create 10 Root objects, each pointing to a Level1 object
      const rootObjects = [];
      for (let i = 0; i < 10; i++) {
        const obj = new RootClass();
        obj.set('name', `root-${i}`);
        obj.set('level1', level1Objects[i % level1Objects.length]);
        rootObjects.push(obj);
      }
      await Parse.Object.saveAll(rootObjects);

      const query = new Parse.Query('Root');
      query.include('level1.level2');
      await query.find();
    },
  });
}
```
Critical: Benchmark measures object creation instead of query performance.
The benchmarkQueryWithInclude function creates all objects (30 per iteration: 10 Level2, 10 Level1, 10 Root) inside the operation callback (lines 398-432). This means the benchmark primarily measures object creation time rather than query-with-include performance, which defeats the purpose of this benchmark.
Additionally, dbLatency: 100 is hardcoded (line 396) without explanation. Why 100ms specifically? This should either be configurable, removed, or documented.
Apply this diff to move object creation outside the measured operation:
```diff
 async function benchmarkQueryWithInclude() {
-  // Setup: Create nested object hierarchy
   const Level2Class = Parse.Object.extend('Level2');
   const Level1Class = Parse.Object.extend('Level1');
   const RootClass = Parse.Object.extend('Root');
+
+  // Setup: Create nested object hierarchy (done once before benchmark)
+  const level2Objects = [];
+  for (let i = 0; i < 10; i++) {
+    const obj = new Level2Class();
+    obj.set('name', `level2-${i}`);
+    obj.set('value', i);
+    level2Objects.push(obj);
+  }
+  await Parse.Object.saveAll(level2Objects);
+
+  const level1Objects = [];
+  for (let i = 0; i < 10; i++) {
+    const obj = new Level1Class();
+    obj.set('name', `level1-${i}`);
+    obj.set('level2', level2Objects[i % level2Objects.length]);
+    level1Objects.push(obj);
+  }
+  await Parse.Object.saveAll(level1Objects);
+
+  const rootObjects = [];
+  for (let i = 0; i < 10; i++) {
+    const obj = new RootClass();
+    obj.set('name', `root-${i}`);
+    obj.set('level1', level1Objects[i % level1Objects.length]);
+    rootObjects.push(obj);
+  }
+  await Parse.Object.saveAll(rootObjects);
   return measureOperation({
     name: 'Query with Include (2 levels)',
     skipWarmup: true,
-    dbLatency: 100,
+    // dbLatency: 100, // Remove or document why 100ms is needed
     iterations: 100,
     operation: async () => {
-      // Create 10 Level2 objects
-      const level2Objects = [];
-      for (let i = 0; i < 10; i++) {
-        const obj = new Level2Class();
-        obj.set('name', `level2-${i}`);
-        obj.set('value', i);
-        level2Objects.push(obj);
-      }
-      await Parse.Object.saveAll(level2Objects);
-
-      // Create 10 Level1 objects, each pointing to a Level2 object
-      const level1Objects = [];
-      for (let i = 0; i < 10; i++) {
-        const obj = new Level1Class();
-        obj.set('name', `level1-${i}`);
-        obj.set('level2', level2Objects[i % level2Objects.length]);
-        level1Objects.push(obj);
-      }
-      await Parse.Object.saveAll(level1Objects);
-
-      // Create 10 Root objects, each pointing to a Level1 object
-      const rootObjects = [];
-      for (let i = 0; i < 10; i++) {
-        const obj = new RootClass();
-        obj.set('name', `root-${i}`);
-        obj.set('level1', level1Objects[i % level1Objects.length]);
-        rootObjects.push(obj);
-      }
-      await Parse.Object.saveAll(rootObjects);
-
+      // Only measure the query with include
       const query = new Parse.Query('Root');
       query.include('level1.level2');
       await query.find();
     },
   });
 }
```

🤖 Prompt for AI Agents
In benchmark/performance.js around lines 387 to 434, the current operation
measured inside benchmarkQueryWithInclude creates 10 Level2, 10 Level1 and 10
Root objects per iteration so the benchmark measures object creation instead of
the query.include performance; move the object creation and Parse.Object.saveAll
calls out of the operation callback so they run once before measureOperation
(create and persist level2Objects, level1Objects and rootObjects once, re-use
them inside the operation), then inside operation only run the Parse.Query with
include and find(); also remove or expose the hardcoded dbLatency: 100 (either
drop it, read from config/env, or add a short comment explaining why 100ms) so
the benchmark latency is explicit and configurable.
🎉 This change has been released in version 8.5.0-alpha.8
Pull Request
Issue
Various optimizations.
Summary by CodeRabbit
New Features
Chores