This change is for general scheduler improvement.
Change-Id: Idef278a9551e6d7d3c1a945dcfd8804cbc7d6aff
Signed-off-by: Puja Gupta <pujag@codeaurora.org>
This change is for general scheduler improvement.
Change-Id: I058e9dd5613485cb0171a1b84e28868711d5185f
Signed-off-by: Pavankumar Kondeti <pkondeti@codeaurora.org>
This change is for general scheduler improvement.
Change-Id: I5d89acdde73f5379d68ebc8513d0bbeaac128f5d
Signed-off-by: Abhijeet Dharmapurikar <adharmap@codeaurora.org>
Signed-off-by: Jonathan Avila <avilaj@codeaurora.org>
This change is for general scheduler improvement.
Change-Id: Ib963aef88d85e15fcd19cda3d3f0944b530239ab
Signed-off-by: Pavankumar Kondeti <pkondeti@codeaurora.org>
This change is for general scheduler improvement.
Change-Id: I556b873cc46911953204792d60c60bf151345b1e
Signed-off-by: Pavankumar Kondeti <pkondeti@codeaurora.org>
The system is killed when memory allocation fails for the
curr/prev window CPU arrays. These are small allocations that
never fail except in scenarios like
(1) testing with random memory failures induced, or
(2) a fatal signal pending for the task doing the allocation.
Instead of killing the system, use the __GFP_NOFAIL flag.
Change-Id: If284814db5a6504b2f039053856ff30c2842808b
Signed-off-by: Pavankumar Kondeti <pkondeti@codeaurora.org>
This change is for general scheduler improvement.
Change-Id: Ie37ab752a4d69569bce506b0a12715bb45ece79e
Co-developed-by: Pavankumar Kondeti <pkondeti@codeaurora.org>
Signed-off-by: Lingutla Chandrasekhar <clingutla@codeaurora.org>
This change is for general scheduler improvement.
Change-Id: I4c56a1111d983b2a690cbce03787ff1d82f6f506
Signed-off-by: Lingutla Chandrasekhar <clingutla@codeaurora.org>
This change is for general scheduler improvement.
Change-Id: I8a65125f4f5a123f5a9551a6a4876e81d1b77dcf
Signed-off-by: Lingutla Chandrasekhar <clingutla@codeaurora.org>
This change is for general scheduler improvement.
Change-Id: I090ed8f91592646f11dd65bc83357945f02702cd
Signed-off-by: Lingutla Chandrasekhar <clingutla@codeaurora.org>
This change is for general scheduler improvement.
Change-Id: I7b080f2acc4ae978a0086b08ddc81a0ef98f24e6
Signed-off-by: Lingutla Chandrasekhar <clingutla@codeaurora.org>
Commit 5ec8b59172b4e ("arm64: topology: restrict updating
siblings_masks to online cpus only") updates the cpu_topology
sibling masks with only online CPUs. This causes issues in WALT's
cluster initialization code, which assumes that cpu_coregroup_mask
contains all possible sibling CPUs. When the system is booted with
a limited set of online CPUs, e.g. via the maxcpus parameter,
WALT's cluster initialization code populates fewer CPUs than are
physically present and has no opportunity to correct them later.
The issue can be fixed by populating the sched_cluster CPUs either
with the device-tree provided cluster CPUs or with
cpufreq->related_cpus. Since the cluster CPUs and the cpufreq
related CPUs could differ, use the device-tree populated cluster
CPUs as the scheduler cluster CPUs.
Add possible sibling CPU support to cpu_topology.
Change-Id: I317771e85cc03fb3998457a30dae48f4f72ca546
Signed-off-by: Lingutla Chandrasekhar <clingutla@codeaurora.org>
This change is for general scheduler improvement.
Change-Id: If7702cc7484fa74838f76c6e1406b568e6997d7d
Signed-off-by: Lingutla Chandrasekhar <clingutla@codeaurora.org>
This change is for general scheduler improvement.
Change-Id: Iba2638d0103e3c68aa8dac4325e85a06c4fd0fcc
Signed-off-by: Lingutla Chandrasekhar <clingutla@codeaurora.org>
With lockdep enabled, a warning fires on boot in the WALT code.
This is because the walt_irq_work handler acquires rq locks in
succession, and lockdep flags taking multiple locks of the same
class as a potential deadlock. To fix the warning, use the
raw_spin_lock_nested API to tell lockdep we are intentionally
acquiring the rq locks in a nested fashion.
Bug: 110360156
Change-Id: I8598d79632991d799edcc8808d2e2f383b7a7ad3
Signed-off-by: Joel Fernandes (Google) <joel@joelfernandes.org>
Signed-off-by: Lingutla Chandrasekhar <clingutla@codeaurora.org>
[satyap@codeaurora.org: resolve trivial merge conflict]
Signed-off-by: Satya Durga Srinivasu Prabhala <satyap@codeaurora.org>
This change is for general scheduler improvement.
Change-Id: I2d3c30440ff69f87dc975b33df9d0531115375ae
Signed-off-by: Pavankumar Kondeti <pkondeti@codeaurora.org>
When CONFIG_ENERGY_MODEL is not selected, compilation errors are
observed. Fix them by guarding the affected code with a check for
CONFIG_ENERGY_MODEL.
Change-Id: Iafbf40596677f75e2425debb96e23d42301a9c0e
Signed-off-by: Satya Durga Srinivasu Prabhala <satyap@codeaurora.org>
This change is for general scheduler improvement.
Change-Id: If04d5d50971a18652a0f3dde22dca8725d20a632
Signed-off-by: Pavankumar Kondeti <pkondeti@codeaurora.org>
Scheduler keeps track of the maximum capacity among all online CPUs
in max_capacity. This is useful in checking if a given cluster/CPU
is a max capacity CPU or not. The capacity of a CPU gets updated
when its max frequency is limited by cpufreq and/or thermal.
CPUfreq limit notifications are received via the CPUfreq policy
notifier. However, CPUfreq keeps the policy intact even when all
of the CPUs governed by the policy are hotplugged out, so the
CPUFREQ_REMOVE_POLICY notification never arrives and the
scheduler's notion of max_capacity becomes stale. The
max_capacity may get corrected at some point later when a
CPUFREQ_NOTIFY notification comes for other online CPUs. But when
the hotplugged CPUs come back online, max_capacity is not
updated, since CPUFREQ_ADD_POLICY is not sent by cpufreq.
For example, consider a system with 4 BIG and 4 little CPUs. Their
original capacities are 2048 and 1024 respectively. The max_capacity
points to 2048 when all CPUs are online. Now,
1. All 4 BIG CPUs are hotplugged out. Since there is no notification,
the max_capacity still points to 2048, which is incorrect.
2. User clips the little CPUs' max_freq by 50%. CPUFREQ_NOTIFY arrives
and max_capacity is updated by iterating all the online CPUs. At this
point max_capacity becomes 512 which is correct.
3. User removes the above limits of little CPUs. The max_capacity
becomes 1024 which is correct.
4. Now, BIG CPUs are hotplugged in. Since there is no notification,
the max_capacity still points to 1024, which is incorrect.
Fix this issue by wiring the max_capacity updates in WALT to the
scheduler hotplug callbacks. Ideally we would want cpufreq domain
hotplug callbacks, but no such notifiers exist. So the
max_capacity update is forced even when it is not necessary, but
that should not be a concern, because CPU hotplug is supposed to
be a rare event.
The scheduler hotplug callbacks happen even before the hotplugged
CPU is removed from cpu_online_mask, so use a cpu_active() check
while evaluating max_capacity. Since cpu_active_mask is a subset
of cpu_online_mask, this is sufficient.
Change-Id: I97b1974e2de1a9730285715858f1ada416d92a7a
Signed-off-by: Pavankumar Kondeti <pkondeti@codeaurora.org>
This snapshot is taken from msm-4.14 as of commit 871eac76e6be567
(Merge "msm: pcie: provide option to override maximum GEN speed").
Change-Id: I8fc95a4a4650de0dc36bd979d374b9335f6af774
Signed-off-by: Satya Durga Srinivasu Prabhala <satyap@codeaurora.org>
This snapshot is taken from msm-4.14 as of
commit 871eac76e6be567 ("sched: Improve the scheduler").
Change-Id: Ib4e0b39526d3009cedebb626ece5a767d8247846
Signed-off-by: Satya Durga Srinivasu Prabhala <satyap@codeaurora.org>