# (Simpler?) Implementation with same amortized runtime and space complexity

After carefully analyzing my options for this problem, I decided to go with this approach, which deviates from the lesson plan (it does not use the sliding window or two pointer pattern).

The reason is that this solution has the same asymptotic runtime characteristics: the two aforementioned patterns do not substantially improve the worst case. It also simplifies the edge cases that come with a sliding window, such as handling 0s in the array and inserting and ejecting 0s from the window (the given answer actually breaks when a 0 is in the array).

At a high level, the approach first generates every subarray whose product is less than the target, using three simple nested loops, then sorts the results by length. The sort adds an `O(N log N)` term; technically it can be reduced to `O(N)` with a bucket sort, but that doesn't change the overall runtime, which is dominated by the `O(N^3)` nested loops.
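If that final sort ever mattered, it could be swapped for a bucket sort keyed on subarray length. A minimal sketch (the helper name `sortByLengthBuckets` is my own, not part of the lesson):

```javascript
// Bucket sort by length: O(N) in the number of combinations,
// since subarray lengths are bounded by the size of the input array.
const sortByLengthBuckets = (combinations) => {
  const buckets = [];
  for (const comb of combinations) {
    (buckets[comb.length] ??= []).push(comb);
  }
  // flat() concatenates buckets in increasing length order, skipping holes.
  return buckets.flat();
};
```

This is stable within each length bucket, so it produces the same ordering as the comparator-based sort.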

Anyway, I hope someone finds this useful; I certainly think there is value in figuring out the right balance between complexity and performance.

```js
const byLength = (a, b) => a.length - b.length;

const subArraysWithProductLessThanATarget = (arr, target) => {
  const combinations = [];

  for (let i = 0; i < arr.length; i++) {
    for (let j = i; j < arr.length; j++) {
      const comb = [];
      let product = 1;
      for (let k = i; k <= j; k++) {
        const val = arr[k];
        product *= val;
        comb.push(val);
      }
      if (product < target) combinations.push(comb);
    }
  }

  return combinations.sort(byLength);
};
```
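As a quick sanity check, here is the function run on an array containing a 0 (the function is repeated so the snippet runs standalone). Any subarray containing the 0 has product 0, which is less than any positive target, so it is correctly included:

```javascript
const byLength = (a, b) => a.length - b.length;

const subArraysWithProductLessThanATarget = (arr, target) => {
  const combinations = [];
  for (let i = 0; i < arr.length; i++) {
    for (let j = i; j < arr.length; j++) {
      const comb = [];
      let product = 1;
      for (let k = i; k <= j; k++) {
        product *= arr[k];
        comb.push(arr[k]);
      }
      if (product < target) combinations.push(comb);
    }
  }
  return combinations.sort(byLength);
};

// Zeros need no special handling here, unlike in the sliding window version.
console.log(subArraysWithProductLessThanATarget([2, 0, 5], 4));
// → [ [2], [0], [2,0], [0,5], [2,0,5] ]
```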