https://source.android.com/docs/security/bulletin/2023-02-01
CVE-2022-39189
CVE-2022-39842
CVE-2022-41222
CVE-2023-20937
CVE-2023-20938
CVE-2022-0850

* tag 'ASB-2023-02-05_4.19-stable' of https://android.googlesource.com/kernel/common:
  Linux 4.19.272
  usb: host: xhci-plat: add wakeup entry at sysfs
  ipv6: ensure sane device mtu in tunnels
  exit: Use READ_ONCE() for all oops/warn limit reads
  docs: Fix path paste-o for /sys/kernel/warn_count
  panic: Expose "warn_count" to sysfs
  panic: Introduce warn_limit
  panic: Consolidate open-coded panic_on_warn checks
  exit: Allow oops_limit to be disabled
  exit: Expose "oops_count" to sysfs
  exit: Put an upper limit on how often we can oops
  ia64: make IA64_MCA_RECOVERY bool instead of tristate
  h8300: Fix build errors from do_exit() to make_task_dead() transition
  hexagon: Fix function name in die()
  objtool: Add a missing comma to avoid string concatenation
  exit: Add and use make_task_dead.
  panic: unset panic_on_warn inside panic()
  sysctl: add a new register_sysctl_init() interface
  dmaengine: imx-sdma: Fix a possible memory leak in sdma_transfer_init
  ARM: dts: imx: Fix pca9547 i2c-mux node name
  x86/entry/64: Add instruction suffix to SYSRET
  x86/asm: Fix an assembler warning with current binutils
  drm/i915/display: fix compiler warning about array overrun
  x86/i8259: Mark legacy PIC interrupts with IRQ_LEVEL
  Revert "Input: synaptics - switch touchpad on HP Laptop 15-da3001TU to RMI mode"
  net/tg3: resolve deadlock in tg3_reset_task() during EEH
  net: ravb: Fix possible hang if RIS2_QFF1 happen
  sctp: fail if no bound addresses can be used for a given scope
  netrom: Fix use-after-free of a listening socket.
  netfilter: conntrack: fix vtag checks for ABORT/SHUTDOWN_COMPLETE
  ipv4: prevent potential spectre v1 gadget in ip_metrics_convert()
  netlink: annotate data races around sk_state
  netlink: annotate data races around dst_portid and dst_group
  netlink: annotate data races around nlk->portid
  netlink: remove hash::nelems check in netlink_insert
  netfilter: nft_set_rbtree: skip elements in transaction from garbage collection
  net: fix UaF in netns ops registration error path
  EDAC/device: Respect any driver-supplied workqueue polling value
  ARM: 9280/1: mm: fix warning on phys_addr_t to void pointer assignment
  cifs: Fix oops due to uncleared server->smbd_conn in reconnect
  smbd: Make upper layer decide when to destroy the transport
  trace_events_hist: add check for return value of 'create_hist_field'
  tracing: Make sure trace_printk() can output as soon as it can be used
  module: Don't wait for GOING modules
  scsi: hpsa: Fix allocation size for scsi_host_alloc()
  Bluetooth: hci_sync: cancel cmd_timer if hci_open failed
  fs: reiserfs: remove useless new_opts in reiserfs_remount
  perf env: Do not return pointers to local variables
  block: fix and cleanup bio_check_ro
  netfilter: conntrack: do not renew entry stuck in tcp SYN_SENT state
  w1: fix WARNING after calling w1_process()
  w1: fix deadloop in __w1_remove_master_device()
  tcp: avoid the lookup process failing to get sk in ehash table
  dmaengine: xilinx_dma: call of_node_put() when breaking out of for_each_child_of_node()
  dmaengine: xilinx_dma: Fix devm_platform_ioremap_resource error handling
  dmaengine: xilinx_dma: program hardware supported buffer length
  dmaengine: xilinx_dma: commonize DMA copy size calculation
  HID: betop: check shape of output reports
  net: macb: fix PTP TX timestamp failure due to packet padding
  dmaengine: Fix double increment of client_count in dma_chan_get()
  net: mlx5: eliminate anonymous module_init & module_exit
  usb: gadget: f_fs: Ensure ep0req is dequeued before free_request
  usb: gadget: f_fs: Prevent race during ffs_ep0_queue_wait
  HID: check empty report_list in hid_validate_values()
  net: mdio: validate parameter addr in mdiobus_get_phy()
  net: usb: sr9700: Handle negative len
  wifi: rndis_wlan: Prevent buffer overflow in rndis_query_oid
  net: nfc: Fix use-after-free in local_cleanup()
  phy: rockchip-inno-usb2: Fix missing clk_disable_unprepare() in rockchip_usb2phy_power_on()
  bpf: Fix pointer-leak due to insufficient speculative store bypass mitigation
  amd-xgbe: Delay AN timeout during KR training
  amd-xgbe: TX Flow Ctrl Registers are h/w ver dependent
  affs: initialize fsdata in affs_truncate()
  IB/hfi1: Fix expected receive setup error exit issues
  IB/hfi1: Reserve user expected TIDs
  IB/hfi1: Reject a zero-length user expected buffer
  tomoyo: fix broken dependency on *.conf.default
  EDAC/highbank: Fix memory leak in highbank_mc_probe()
  HID: intel_ish-hid: Add check for ishtp_dma_tx_map
  ARM: dts: imx6qdl-gw560x: Remove incorrect 'uart-has-rtscts'
  UPSTREAM: tcp: fix tcp_rmem documentation
  UPSTREAM: nvmem: core: skip child nodes not matching binding
  BACKPORT: nvmem: core: Fix a resource leak on error in nvmem_add_cells_from_of()
  UPSTREAM: sched/eas: Don't update misfit status if the task is pinned
  BACKPORT: arm64: link with -z norelro for LLD or aarch64-elf
  UPSTREAM: driver: core: Fix list corruption after device_del()
  UPSTREAM: coresight: tmc-etr: Fix barrier packet insertion for perf buffer
  UPSTREAM: f2fs: fix double free of unicode map
  BACKPORT: net: xfrm: fix memory leak in xfrm_user_policy()
  UPSTREAM: xfrm/compat: Don't allocate memory with __GFP_ZERO
  UPSTREAM: xfrm/compat: memset(0) 64-bit padding at right place
  UPSTREAM: xfrm/compat: Translate by copying XFRMA_UNSPEC attribute
  UPSTREAM: scsi: ufs: Fix missing brace warning for old compilers
  UPSTREAM: arm64: vdso32: make vdso32 install conditional
  UPSTREAM: loop: unset GENHD_FL_NO_PART_SCAN on LOOP_CONFIGURE
  BACKPORT: drm/virtio: fix missing dma_fence_put() in virtio_gpu_execbuffer_ioctl()
  BACKPORT: sched/uclamp: Protect uclamp fast path code with static key
  BACKPORT: sched/uclamp: Fix initialization of struct uclamp_rq
  UPSTREAM: coresight: etmv4: Fix CPU power management setup in probe() function
  UPSTREAM: arm64: vdso: Add --eh-frame-hdr to ldflags
  BACKPORT: arm64: vdso: Add '-Bsymbolic' to ldflags
  UPSTREAM: drm/virtio: fix a wait_event condition
  BACKPORT: sched/topology: Don't try to build empty sched domains
  BACKPORT: binder: prevent UAF read in print_binder_transaction_log_entry()
  BACKPORT: copy_process(): don't use ksys_close() on cleanups
  BACKPORT: arm64: vdso: Remove unnecessary asm-offsets.c definitions
  UPSTREAM: locking/lockdep, cpu/hotplug: Annotate AP thread
  Revert "xhci: Add a flag to disable USB3 lpm on a xhci root port level."
  BACKPORT: mac80211_hwsim: add concurrent channels scanning support over virtio
  BACKPORT: mac80211_hwsim: add frame transmission support over virtio
    This allows communication with external entities.
  BACKPORT: driver core: Skip unnecessary work when device doesn't have sync_state()
  Linux 4.19.271
  x86/fpu: Use _Alignof to avoid undefined behavior in TYPE_ALIGN
  Revert "ext4: generalize extents status tree search functions"
  Revert "ext4: add new pending reservation mechanism"
  Revert "ext4: fix reserved cluster accounting at delayed write time"
  Revert "ext4: fix delayed allocation bug in ext4_clu_mapped for bigalloc + inline"
  gsmi: fix null-deref in gsmi_get_variable
  serial: atmel: fix incorrect baudrate setup
  serial: pch_uart: Pass correct sg to dma_unmap_sg()
  usb-storage: apply IGNORE_UAS only for HIKSEMI MD202 on RTL9210
  usb: gadget: f_ncm: fix potential NULL ptr deref in ncm_bitrate()
  usb: gadget: g_webcam: Send color matching descriptor per frame
  usb: typec: altmodes/displayport: Fix pin assignment calculation
  usb: typec: altmodes/displayport: Add pin assignment helper
  usb: host: ehci-fsl: Fix module alias
  USB: serial: cp210x: add SCALANCE LPE-9000 device id
  cifs: do not include page data when checking signature
  mmc: sunxi-mmc: Fix clock refcount imbalance during unbind
  comedi: adv_pci1760: Fix PWM instruction handling
  usb: core: hub: disable autosuspend for TI TUSB8041
  USB: misc: iowarrior: fix up header size for USB_DEVICE_ID_CODEMERCS_IOW100
  USB: serial: option: add Quectel EM05CN modem
  USB: serial: option: add Quectel EM05CN (SG) modem
  USB: serial: option: add Quectel EC200U modem
  USB: serial: option: add Quectel EM05-G (RS) modem
  USB: serial: option: add Quectel EM05-G (CS) modem
  USB: serial: option: add Quectel EM05-G (GR) modem
  prlimit: do_prlimit needs to have a speculation check
  xhci: Add a flag to disable USB3 lpm on a xhci root port level.
  xhci: Fix null pointer dereference when host dies
  usb: xhci: Check endpoint is valid before dereferencing it
  xhci-pci: set the dma max_seg_size
  nilfs2: fix general protection fault in nilfs_btree_insert()
  Add exception protection processing for vd in axi_chan_handle_err function
  f2fs: let's avoid panic if extent_tree is not created
  RDMA/srp: Move large values to a new enum for gcc13
  net/ethtool/ioctl: return -EOPNOTSUPP if we have no phy stats
  pNFS/filelayout: Fix coalescing test for single DS
  ANDROID: usb: f_accessory: Check buffer size when initialised via composite
  Linux 4.19.270
  serial: tegra: Change lower tolerance baud rate limit for tegra20 and tegra30
  serial: tegra: Only print FIFO error message when an error occurs
  tty: serial: tegra: Handle RX transfer in PIO mode if DMA wasn't started
  Revert "usb: ulpi: defer ulpi_register on ulpi_read_id timeout"
  efi: fix NULL-deref in init error path
  arm64: cmpxchg_double*: hazard against entire exchange variable
  drm/virtio: Fix GEM handle creation UAF
  x86/resctrl: Fix task CLOSID/RMID update race
  x86/resctrl: Use task_curr() instead of task_struct->on_cpu to prevent unnecessary IPI
  iommu/mediatek-v1: Fix an error handling path in mtk_iommu_v1_probe()
  iommu/mediatek-v1: Add error handle for mtk_iommu_probe
  net/mlx5: Fix ptp max frequency adjustment range
  net/mlx5: Rename ptp clock info
  nfc: pn533: Wait for out_urb's completion in pn533_usb_send_frame()
  hvc/xen: lock console list traversal
  regulator: da9211: Use irq handler when ready
  EDAC/device: Fix period calculation in edac_device_reset_delay_period()
  x86/boot: Avoid using Intel mnemonics in AT&T syntax asm
  netfilter: ipset: Fix overflow before widen in the bitmap_ip_create() function.
  ext4: fix delayed allocation bug in ext4_clu_mapped for bigalloc + inline
  ext4: fix reserved cluster accounting at delayed write time
  ext4: add new pending reservation mechanism
  ext4: generalize extents status tree search functions
  ext4: fix uninititialized value in 'ext4_evict_inode'
  ext4: fix use-after-free in ext4_orphan_cleanup
  ext4: lost matching-pair of trace in ext4_truncate
  ext4: fix bug_on in __es_tree_search caused by bad quota inode
  quota: Factor out setup of quota inode
  usb: ulpi: defer ulpi_register on ulpi_read_id timeout
  kest.pl: Fix grub2 menu handling for rebooting
  ktest.pl: Fix incorrect reboot for grub2bls
  ktest: introduce grub2bls REBOOT_TYPE option
  ktest: cleanup get_grub_index
  ktest: introduce _get_grub_index
  ktest: Add support for meta characters in GRUB_MENU
  ALSA: hda/hdmi: fix failures at PCM open on Intel ICL and later
  wifi: wilc1000: sdio: fix module autoloading
  ipv6: raw: Deduct extension header length in rawv6_push_pending_frames
  platform/x86: sony-laptop: Don't turn off 0x153 keyboard backlight during probe
  cifs: Fix uninitialized memory read for smb311 posix symlink create
  ALSA: pcm: Move rwsem lock inside snd_ctl_elem_read to prevent UAF
  net/ulp: prevent ULP without clone op from entering the LISTEN status
  s390/percpu: add READ_ONCE() to arch_this_cpu_to_op_simple()
  perf auxtrace: Fix address filter duplicate symbol selection
  docs: Fix the docs build with Sphinx 6.0
  net: sched: disallow noqueue for qdisc classes
  driver core: Fix bus_type.match() error handling in __driver_attach()
  parisc: Align parisc MADV_XXX constants with all other architectures
  mbcache: Avoid nesting of cache->c_list_lock under bit locks
  hfs/hfsplus: avoid WARN_ON() for sanity check, use proper error handling
  hfs/hfsplus: use WARN_ON for sanity check
  ext4: don't allow journal inode to have encrypt flag
  riscv: uaccess: fix type of 0 variable on error in get_user()
  nfsd: fix handling of readdir in v4root vs. mount upcall timeout
  x86/bugs: Flush IBP in ib_prctl_set()
  ASoC: Intel: bytcr_rt5640: Add quirk for the Advantech MICA-071 tablet
  udf: Fix extension of the last extent in the file
  caif: fix memory leak in cfctrl_linkup_request()
  usb: rndis_host: Secure rndis_query check against int overflow
  net: sched: atm: dont intepret cls results when asked to drop
  RDMA/mlx5: Fix validation of max_rd_atomic caps for DC
  net: phy: xgmiitorgmii: Fix refcount leak in xgmiitorgmii_probe
  net: amd-xgbe: add missed tasklet_kill
  nfc: Fix potential resource leaks
  qlcnic: prevent ->dcb use-after-free on qlcnic_dcb_enable() failure
  bpf: pull before calling skb_postpull_rcsum()
  SUNRPC: ensure the matching upcall is in-flight upon downcall
  ext4: fix deadlock due to mbcache entry corruption
  mbcache: automatically delete entries from cache on freeing
  ext4: fix race when reusing xattr blocks
  ext4: unindent codeblock in ext4_xattr_block_set()
  ext4: remove EA inode entry from mbcache on inode eviction
  mbcache: add functions to delete entry if unused
  mbcache: don't reclaim used entries
  ext4: use kmemdup() to replace kmalloc + memcpy
  ext4: correct inconsistent error msg in nojournal mode
  ext4: goto right label 'failed_mount3a'
  driver core: Set deferred_probe_timeout to a longer default if CONFIG_MODULES is set
  ravb: Fix "failed to switch device to config mode" message during unbind
  perf probe: Fix to get the DW_AT_decl_file and DW_AT_call_file as unsinged data
  perf probe: Use dwarf_attr_integrate as generic DWARF attr accessor
  dm thin: resume even if in FAIL mode
  media: s5p-mfc: Fix in register read and write for H264
  media: s5p-mfc: Clear workbit to handle error condition
  media: s5p-mfc: Fix to handle reference queue during finishing
  btrfs: replace strncpy() with strscpy()
  btrfs: send: avoid unnecessary backref lookups when finding clone source
  ext4: allocate extended attribute value in vmalloc area
  ext4: avoid unaccounted block allocation when expanding inode
  ext4: initialize quota before expanding inode in setproject ioctl
  ext4: fix inode leak in ext4_xattr_inode_create() on an error path
  ext4: avoid BUG_ON when creating xattrs
  ext4: fix error code return to user-space in ext4_get_branch()
  ext4: fix corruption when online resizing a 1K bigalloc fs
  ext4: init quota for 'old.inode' in 'ext4_rename'
  ext4: fix bug_on in __es_tree_search caused by bad boot loader inode
  ext4: add helper to check quota inums
  ext4: fix undefined behavior in bit shift for ext4_check_flag_values
  ext4: add inode table check in __ext4_get_inode_loc to aovid possible infinite loop
  drm/vmwgfx: Validate the box size for the snooped cursor
  drm/connector: send hotplug uevent on connector cleanup
  device_cgroup: Roll back to original exceptions after copy failure
  parisc: led: Fix potential null-ptr-deref in start_task()
  iommu/amd: Fix ivrs_acpihid cmdline parsing code
  crypto: n2 - add missing hash statesize
  PCI/sysfs: Fix double free in error path
  PCI: Fix pci_device_is_present() for VFs by checking PF
  ipmi: fix use after free in _ipmi_destroy_user()
  ima: Fix a potential NULL pointer access in ima_restore_measurement_list
  ipmi: fix long wait in unload when IPMI disconnect
  md/bitmap: Fix bitmap chunk size overflow issues
  cifs: fix confusing debug message
  media: dvb-core: Fix UAF due to refcount races at releasing
  media: dvb-core: Fix double free in dvb_register_device()
  ARM: 9256/1: NWFPE: avoid compiler-generated __aeabi_uldivmod
  tracing: Fix infinite loop in tracing_read_pipe on overflowed print_trace_line
  x86/microcode/intel: Do not retry microcode reloading on the APs
  dm cache: set needs_check flag after aborting metadata
  dm cache: Fix UAF in destroy()
  dm thin: Fix UAF in run_timer_softirq()
  dm thin: Use last transaction's pmd->root when commit failed
  dm cache: Fix ABBA deadlock between shrink_slab and dm_cache_metadata_abort
  binfmt: Fix error return code in load_elf_fdpic_binary()
  binfmt: Move install_exec_creds after setup_new_exec to match binfmt_elf
  selftests: Use optional USERCFLAGS and USERLDFLAGS
  ARM: ux500: do not directly dereference __iomem
  ktest.pl minconfig: Unset configs instead of just removing them
  soc: qcom: Select REMAP_MMIO for LLCC driver
  media: stv0288: use explicitly signed char
  SUNRPC: Don't leak netobj memory when gss_read_proxy_verf() fails
  tpm: tpm_tis: Add the missed acpi_put_table() to fix memory leak
  tpm: tpm_crb: Add the missed acpi_put_table() to fix memory leak
  mmc: vub300: fix warning - do not call blocking ops when !TASK_RUNNING
  md: fix a crash in mempool_free
  pnode: terminate at peers of source
  ALSA: line6: fix stack overflow in line6_midi_transmit
  ALSA: line6: correct midi status byte when receiving data from podxt
  ovl: Use ovl mounter's fsuid and fsgid in ovl_link()
  hfsplus: fix bug causing custom uid and gid being unable to be assigned with mount
  HID: plantronics: Additional PIDs for double volume key presses quirk
  powerpc/rtas: avoid scheduling in rtas_os_term()
  powerpc/rtas: avoid device tree lookups in rtas_os_term()
  ata: ahci: Fix PCS quirk application for suspend
  media: dvbdev: fix refcnt bug
  media: dvbdev: fix build warning due to comments
  gcov: add support for checksum field
  iio: adc: ad_sigma_delta: do not use internal iio_dev lock
  reiserfs: Add missing calls to reiserfs_security_free()
  HID: wacom: Ensure bootloader PID is usable in hidraw mode
  usb: dwc3: core: defer probe on ulpi_read_id timeout
  pstore: Make sure CONFIG_PSTORE_PMSG selects CONFIG_RT_MUTEXES
  pstore: Switch pmsg_lock to an rt_mutex to avoid priority inversion
  ASoC: rt5670: Remove unbalanced pm_runtime_put()
  ASoC: rockchip: spdif: Add missing clk_disable_unprepare() in rk_spdif_runtime_resume()
  ASoC: wm8994: Fix potential deadlock
  ASoC: rockchip: pdm: Add missing clk_disable_unprepare() in rockchip_pdm_runtime_resume()
  ASoC: mediatek: mt8173-rt5650-rt5514: fix refcount leak in mt8173_rt5650_rt5514_dev_probe()
  orangefs: Fix kmemleak in orangefs_prepare_debugfs_help_string()
  drm/sti: Fix return type of sti_{dvo,hda,hdmi}_connector_mode_valid()
  drm/fsl-dcu: Fix return type of fsl_dcu_drm_connector_mode_valid()
  clk: st: Fix memory leak in st_of_quadfs_setup()
  media: si470x: Fix use-after-free in si470x_int_in_callback()
  mmc: f-sdh30: Add quirks for broken timeout clock capability
  regulator: core: fix use_count leakage when handling boot-on
  blk-mq: fix possible memleak when register 'hctx' failed
  media: dvb-usb: fix memory leak in dvb_usb_adapter_init()
  media: dvbdev: adopts refcnt to avoid UAF
  media: dvb-frontends: fix leak of memory fw
  ppp: associate skb with a device at tx
  mrp: introduce active flags to prevent UAF when applicant uninit
  md/raid1: stop mdx_raid1 thread when raid1 array run failed
  drivers/md/md-bitmap: check the return value of md_bitmap_get_counter()
  drm/sti: Use drm_mode_copy()
  s390/lcs: Fix return type of lcs_start_xmit()
  s390/netiucv: Fix return type of netiucv_tx()
  s390/ctcm: Fix return type of ctc{mp,}m_tx()
  drm/amdgpu: Fix type of second parameter in trans_msg() callback
  igb: Do not free q_vector unless new one was allocated
  wifi: brcmfmac: Fix potential shift-out-of-bounds in brcmf_fw_alloc_request()
  hamradio: baycom_epp: Fix return type of baycom_send_packet()
  net: ethernet: ti: Fix return type of netcp_ndo_start_xmit()
  bpf: make sure skb->len != 0 when redirecting to a tunneling device
  ipmi: fix memleak when unload ipmi driver
  ASoC: codecs: rt298: Add quirk for KBL-R RVP platform
  wifi: ar5523: Fix use-after-free on ar5523_cmd() timed out
  wifi: ath9k: verify the expected usb_endpoints are present
  hfs: fix OOB Read in __hfs_brec_find
  acct: fix potential integer overflow in encode_comp_t()
  nilfs2: fix shift-out-of-bounds/overflow in nilfs_sb2_bad_offset()
  ACPICA: Fix error code path in acpi_ds_call_control_method()
  fs: jfs: fix shift-out-of-bounds in dbDiscardAG
  udf: Avoid double brelse() in udf_rename()
  fs: jfs: fix shift-out-of-bounds in dbAllocAG
  binfmt_misc: fix shift-out-of-bounds in check_special_flags
  net: stream: purge sk_error_queue in sk_stream_kill_queues()
  myri10ge: Fix an error handling path in myri10ge_probe()
  rxrpc: Fix missing unlock in rxrpc_do_sendmsg()
  net_sched: reject TCF_EM_SIMPLE case for complex ematch module
  skbuff: Account for tail adjustment during pull operations
  openvswitch: Fix flow lookup to use unmasked key
  rtc: mxc_v2: Add missing clk_disable_unprepare()
  r6040: Fix kmemleak in probe and remove
  nfc: pn533: Clear nfc_target before being used
  mISDN: hfcmulti: don't call dev_kfree_skb/kfree_skb() under spin_lock_irqsave()
  mISDN: hfcpci: don't call dev_kfree_skb/kfree_skb() under spin_lock_irqsave()
  mISDN: hfcsusb: don't call dev_kfree_skb/kfree_skb() under spin_lock_irqsave()
  nfsd: under NFSv4.1, fix double svc_xprt_put on rpc_create failure
  rtc: st-lpc: Add missing clk_disable_unprepare in st_rtc_probe()
  selftests/powerpc: Fix resource leaks
  powerpc/hv-gpci: Fix hv_gpci event list
  powerpc/83xx/mpc832x_rdb: call platform_device_put() in error case in of_fsl_spi_probe()
  powerpc/perf: callchain validate kernel stack pointer bounds
  powerpc/xive: add missing iounmap() in error path in xive_spapr_populate_irq_data()
  cxl: Fix refcount leak in cxl_calc_capp_routing
  powerpc/52xx: Fix a resource leak in an error handling path
  macintosh/macio-adb: check the return value of ioremap()
  macintosh: fix possible memory leak in macio_add_one_device()
  iommu/fsl_pamu: Fix resource leak in fsl_pamu_probe()
  iommu/amd: Fix pci device refcount leak in ppr_notifier()
  rtc: snvs: Allow a time difference on clock register read
  include/uapi/linux/swab: Fix potentially missing __always_inline
  HSI: omap_ssi_core: Fix error handling in ssi_init()
  perf symbol: correction while adjusting symbol
  power: supply: fix residue sysfs file in error handle route of __power_supply_register()
  HSI: omap_ssi_core: fix possible memory leak in ssi_probe()
  HSI: omap_ssi_core: fix unbalanced pm_runtime_disable()
  fbdev: uvesafb: Fixes an error handling path in uvesafb_probe()
  fbdev: vermilion: decrease reference count in error path
  fbdev: via: Fix error in via_core_init()
  fbdev: pm2fb: fix missing pci_disable_device()
  fbdev: ssd1307fb: Drop optional dependency
  samples: vfio-mdev: Fix missing pci_disable_device() in mdpy_fb_probe()
  tracing/hist: Fix issue of losting command info in error_log
  usb: storage: Add check for kcalloc
  i2c: ismt: Fix an out-of-bounds bug in ismt_access()
  vme: Fix error not catched in fake_init()
  staging: rtl8192e: Fix potential use-after-free in rtllib_rx_Monitor()
  staging: rtl8192u: Fix use after free in ieee80211_rx()
  i2c: pxa-pci: fix missing pci_disable_device() on error in ce4100_i2c_probe
  chardev: fix error handling in cdev_device_add()
  mcb: mcb-parse: fix error handing in chameleon_parse_gdd()
  drivers: mcb: fix resource leak in mcb_probe()
  usb: gadget: f_hid: fix refcount leak on error path
  usb: gadget: f_hid: fix f_hidg lifetime vs cdev
  usb: gadget: f_hid: optional SETUP/SET_REPORT mode
  cxl: fix possible null-ptr-deref in cxl_pci_init_afu|adapter()
  cxl: fix possible null-ptr-deref in cxl_guest_init_afu|adapter()
  misc: sgi-gru: fix use-after-free error in gru_set_context_option, gru_fault and gru_handle_user_call_os
  misc: tifm: fix possible memory leak in tifm_7xx1_switch_media()
  test_firmware: fix memory leak in test_firmware_init()
  serial: sunsab: Fix error handling in sunsab_init()
  serial: altera_uart: fix locking in polling mode
  tty: serial: altera_uart_{r,t}x_chars() need only uart_port
  tty: serial: clean up stop-tx part in altera_uart_tx_chars()
  serial: pch: Fix PCI device refcount leak in pch_request_dma()
  serial: pl011: Do not clear RX FIFO & RX interrupt in unthrottle.
  serial: amba-pl011: avoid SBSA UART accessing DMACR register
  usb: typec: Check for ops->exit instead of ops->enter in altmode_exit
  staging: vme_user: Fix possible UAF in tsi148_dma_list_add
  usb: fotg210-udc: Fix ages old endianness issues
  uio: uio_dmem_genirq: Fix deadlock between irq config and handling
  uio: uio_dmem_genirq: Fix missing unlock in irq configuration
  vfio: platform: Do not pass return buffer to ACPI _RST method
  class: fix possible memory leak in __class_register()
  serial: tegra: Read DMA status before terminating
  tty: serial: tegra: Activate RX DMA transfer by request
  serial: tegra: Add PIO mode support
  serial: tegra: report clk rate errors
  serial: tegra: add support to adjust baud rate
  serial: tegra: add support to use 8 bytes trigger
  serial: tegra: set maximum num of uart ports to 8
  serial: tegra: check for FIFO mode enabled status
  serial: tegra: avoid reg access when clk disabled
  drivers: dio: fix possible memory leak in dio_init()
  IB/IPoIB: Fix queue count inconsistency for PKEY child interfaces
  hwrng: geode - Fix PCI device refcount leak
  hwrng: amd - Fix PCI device refcount leak
  crypto: img-hash - Fix variable dereferenced before check 'hdev->req'
  orangefs: Fix sysfs not cleanup when dev init failed
  RDMA/hfi1: Fix error return code in parse_platform_config()
  scsi: snic: Fix possible UAF in snic_tgt_create()
  scsi: fcoe: Fix transport not deattached when fcoe_if_init() fails
  scsi: ipr: Fix WARNING in ipr_init()
  scsi: fcoe: Fix possible name leak when device_register() fails
  scsi: hpsa: Fix possible memory leak in hpsa_add_sas_device()
  scsi: hpsa: Fix error handling in hpsa_add_sas_host()
  crypto: tcrypt - Fix multibuffer skcipher speed test mem leak
  scsi: hpsa: Fix possible memory leak in hpsa_init_one()
  scsi: hpsa: use local workqueues instead of system workqueues
  RDMA/rxe: Fix NULL-ptr-deref in rxe_qp_do_cleanup() when socket create failed
  crypto: ccree - Make cc_debugfs_global_fini() available for module init function
  RDMA/hfi: Decrease PCI device reference count in error path
  PCI: Check for alloc failure in pci_request_irq()
  scsi: scsi_debug: Fix a warning in resp_write_scat()
  RDMA/nldev: Return "-EAGAIN" if the cm_id isn't from expected port
  f2fs: fix normal discard process
  apparmor: Fix abi check to include v8 abi
  apparmor: fix lockdep warning when removing a namespace
  apparmor: fix a memleak in multi_transaction_new()
  stmmac: fix potential division by 0
  Bluetooth: RFCOMM: don't call kfree_skb() under spin_lock_irqsave()
  Bluetooth: hci_core: don't call kfree_skb() under spin_lock_irqsave()
  Bluetooth: hci_bcsp: don't call kfree_skb() under spin_lock_irqsave()
  Bluetooth: hci_h5: don't call kfree_skb() under spin_lock_irqsave()
  Bluetooth: hci_qca: don't call kfree_skb() under spin_lock_irqsave()
  Bluetooth: btusb: don't call kfree_skb() under spin_lock_irqsave()
  ntb_netdev: Use dev_kfree_skb_any() in interrupt context
  net: lan9303: Fix read error execution path
  net: amd-xgbe: Check only the minimum speed for active/passive cables
  net: amd-xgbe: Fix logic around active and passive cables
  net: amd: lance: don't call dev_kfree_skb() under spin_lock_irqsave()
  hamradio: don't call dev_kfree_skb() under spin_lock_irqsave()
  net: ethernet: dnet: don't call dev_kfree_skb() under spin_lock_irqsave()
  net: emaclite: don't call dev_kfree_skb() under spin_lock_irqsave()
  net: apple: bmac: don't call dev_kfree_skb() under spin_lock_irqsave()
  net: apple: mace: don't call dev_kfree_skb() under spin_lock_irqsave()
  net/tunnel: wait until all sk_user_data reader finish before releasing the sock
  net: farsync: Fix kmemleak when rmmods farsync
  ethernet: s2io: don't call dev_kfree_skb() under spin_lock_irqsave()
  drivers: net: qlcnic: Fix potential memory leak in qlcnic_sriov_init()
  net: defxx: Fix missing err handling in dfx_init()
  net: vmw_vsock: vmci: Check memcpy_from_msg()
  clk: socfpga: use clk_hw_register for a5/c5
  clk: socfpga: clk-pll: Remove unused variable 'rc'
  blktrace: Fix output non-blktrace event when blk_classic option enabled
  wifi: brcmfmac: Fix error return code in brcmf_sdio_download_firmware()
  rtl8xxxu: add enumeration for channel bandwidth
  wifi: rtl8xxxu: Add __packed to struct rtl8723bu_c2h
  clk: samsung: Fix memory leak in _samsung_clk_register_pll()
  media: coda: Add check for kmalloc
  media: coda: Add check for dcoda_iram_alloc
  media: c8sectpfe: Add of_node_put() when breaking out of loop
  mmc: mmci: fix return value check of mmc_add_host()
  mmc: wbsd: fix return value check of mmc_add_host()
  mmc: via-sdmmc: fix return value check of mmc_add_host()
  mmc: meson-gx: fix return value check of mmc_add_host()
  mmc: atmel-mci: fix return value check of mmc_add_host()
  mmc: wmt-sdmmc: fix return value check of mmc_add_host()
  mmc: vub300: fix return value check of mmc_add_host()
  mmc: toshsd: fix return value check of mmc_add_host()
  mmc: rtsx_usb_sdmmc: fix return value check of mmc_add_host()
  mmc: mxcmmc: fix return value check of mmc_add_host()
  mmc: moxart: fix return value check of mmc_add_host()
  NFSv4.x: Fail client initialisation if state manager thread can't run
  SUNRPC: Fix missing release socket in rpc_sockname()
  ALSA: mts64: fix possible null-ptr-defer in snd_mts64_interrupt
  media: saa7164: fix missing pci_disable_device()
  regulator: core: fix module refcount leak in set_supply()
  wifi: cfg80211: Fix not unregister reg_pdev when load_builtin_regdb_keys() fails
  bonding: uninitialized variable in bond_miimon_inspect()
  ASoC: pcm512x: Fix PM disable depth imbalance in pcm512x_probe
  drm/amdgpu: Fix PCI device refcount leak in amdgpu_atrm_get_bios()
  drm/radeon: Fix PCI device refcount leak in radeon_atrm_get_bios()
  ALSA: asihpi: fix missing pci_disable_device()
  NFSv4: Fix a deadlock between nfs4_open_recover_helper() and delegreturn
  NFSv4.2: Fix a memory stomp in decode_attr_security_label
  drm/tegra: Add missing clk_disable_unprepare() in tegra_dc_probe()
  media: s5p-mfc: Add variant data for MFC v7 hardware for Exynos 3250 SoC
  media: dvb-usb: az6027: fix null-ptr-deref in az6027_i2c_xfer()
  media: dvb-core: Fix ignored return value in dvb_register_frontend()
  pinctrl: pinconf-generic: add missing of_node_put()
  media: imon: fix a race condition in send_packet()
  drbd: remove call to memset before free device/resource/connection
  mtd: maps: pxa2xx-flash: fix memory leak in probe
  bonding: Export skip slave logic to function
  clk: rockchip: Fix memory leak in rockchip_clk_register_pll()
  ALSA: seq: fix undefined behavior in bit shift for SNDRV_SEQ_FILTER_USE_EVENT
  HID: hid-sensor-custom: set fixed size for custom attributes
  media: platform: exynos4-is: Fix error handling in fimc_md_init()
  media: solo6x10: fix possible memory leak in solo_sysfs_init()
  Input: elants_i2c - properly handle the reset GPIO when power is off
  mtd: lpddr2_nvm: Fix possible null-ptr-deref
  wifi: ath10k: Fix return value in ath10k_pci_init()
  ima: Fix misuse of dereference of pointer in template_desc_init_fields()
  regulator: core: fix unbalanced of node refcount in regulator_dev_lookup()
  ASoC: pxa: fix null-pointer dereference in filter()
  drm/radeon: Add the missed acpi_put_table() to fix memory leak
  net, proc: Provide PROC_FS=n fallback for proc_create_net_single_write()
  media: camss: Clean up received buffers on failed start of streaming
  wifi: rsi: Fix handling of 802.3 EAPOL frames sent via control port
  mtd: Fix device name leak when register device failed in add_mtd_device()
  media: vivid: fix compose size exceed boundary
  spi: Update reference to struct spi_controller
  can: kvaser_usb: Compare requested bittiming parameters with actual parameters in do_set_{,data}_bittiming
  can: kvaser_usb: Add struct kvaser_usb_busparams
  can: kvaser_usb_leaf: Fix bogus restart events
  can: kvaser_usb_leaf: Fix wrong CAN state after stopping
  can: kvaser_usb_leaf: Fix improved state not being reported
  can: kvaser_usb_leaf: Set Warning state even without bus errors
  can: kvaser_usb: kvaser_usb_leaf: Handle CMD_ERROR_EVENT
  can: kvaser_usb: kvaser_usb_leaf: Rename {leaf,usbcan}_cmd_error_event to {leaf,usbcan}_cmd_can_error_event
  can: kvaser_usb: kvaser_usb_leaf: Get capabilities from device
  can: kvaser_usb: do not increase tx statistics when sending error message frames
  media: i2c: ad5820: Fix error path
  pata_ipx4xx_cf: Fix unsigned comparison with less than zero
  wifi: rtl8xxxu: Fix reading the vendor of combo chips
  wifi: ath9k: hif_usb: Fix use-after-free in ath9k_hif_usb_reg_in_cb()
  wifi: ath9k: hif_usb: fix memory leak of urbs in ath9k_hif_usb_dealloc_tx_urbs()
  rapidio: devices: fix missing put_device in mport_cdev_open
  hfs: Fix OOB Write in hfs_asc2mac
  relay: fix type mismatch when allocating memory in relay_create_buf()
  eventfd: change int to __u64 in eventfd_signal() ifndef CONFIG_EVENTFD
  rapidio: fix possible UAF when kfifo_alloc() fails
  fs: sysv: Fix sysv_nblocks() returns wrong value
  MIPS: BCM63xx: Add check for NULL for clk in clk_enable
  platform/x86: mxm-wmi: fix memleak in mxm_wmi_call_mx[ds|mx]()
  PM: runtime: Do not call __rpm_callback() from rpm_idle()
  PM: runtime: Improve path in rpm_idle() when no callback
  xen/privcmd: Fix a possible warning in privcmd_ioctl_mmap_resource()
  x86/xen: Fix memory leak in xen_init_lock_cpu()
  x86/xen: Fix memory leak in xen_smp_intr_init{_pv}()
  xen/events: only register debug interrupt for 2-level events
  uprobes/x86: Allow to probe a NOP instruction with 0x66 prefix
  ACPICA: Fix use-after-free in acpi_ut_copy_ipackage_to_ipackage()
  clocksource/drivers/sh_cmt: Make sure channel clock supply is enabled
  rapidio: rio: fix possible name leak in rio_register_mport()
  rapidio: fix possible name leaks when rio_add_device() fails
  debugfs: fix error when writing negative value to atomic_t debugfs file
  lib/notifier-error-inject: fix error when writing -errno to debugfs file
  libfs: add DEFINE_SIMPLE_ATTRIBUTE_SIGNED for signed value
  cpufreq: amd_freq_sensitivity: Add missing pci_dev_put()
  irqchip: gic-pm: Use pm_runtime_resume_and_get() in gic_probe()
  perf/x86/intel/uncore: Fix reference count leak in hswep_has_limit_sbox()
  PNP: fix name memory leak in pnp_alloc_dev()
  MIPS: vpe-cmp: fix possible memory leak while module exiting
  MIPS: vpe-mt: fix possible memory leak while module exiting
  ocfs2: fix memory leak in ocfs2_stack_glue_init()
  proc: fixup uptime selftest
  timerqueue: Use rb_entry_safe() in timerqueue_getnext()
  perf: Fix possible memleak in pmu_dev_alloc()
  selftests/ftrace: event_triggers: wait longer for test_event_enable
  fs: don't audit the capability check in simple_xattr_list()
  alpha: fix syscall entry in !AUDUT_SYSCALL case
  cpuidle: dt: Return the correct numbers of parsed idle states
  tpm/tpm_crb: Fix error message in __crb_relinquish_locality()
  pstore: Avoid kcore oops by vmap()ing with VM_IOREMAP
  ARM: mmp: fix timer_read delay
  pstore/ram: Fix error return code in ramoops_probe()
  ARM: dts: turris-omnia: Add switch port 6 node
  ARM: dts: turris-omnia: Add ethernet aliases
  ARM: dts: armada-39x: Fix assigned-addresses for every PCIe Root Port
  ARM: dts: armada-38x: Fix assigned-addresses for every PCIe Root Port
  ARM: dts: armada-375: Fix assigned-addresses for every PCIe Root Port
  ARM: dts: armada-xp: Fix assigned-addresses for every PCIe Root Port
  ARM: dts: armada-370: Fix assigned-addresses for every PCIe Root Port
  ARM: dts: dove: Fix assigned-addresses for every PCIe Root Port
  arm64: dts: mediatek: mt6797: Fix 26M oscillator unit name
  arm64: dts: mt2712-evb: Fix vproc fixed regulators unit names
  arm64: dts: mt2712e: Fix unit address for pinctrl node
  arm64: dts: mt2712e: Fix unit_address_vs_reg warning for oscillators
  perf: arm_dsu: Fix hotplug callback leak in dsu_pmu_init()
  soc: ti: smartreflex: Fix PM disable depth imbalance in omap_sr_probe
  arm: dts: spear600: Fix clcd interrupt
  drivers: soc: ti: knav_qmss_queue: Mark knav_acc_firmwares as static
  ARM: dts: qcom: apq8064: fix coresight compatible
  usb: musb: remove extra check in musb_gadget_vbus_draw
  net: loopback: use NET_NAME_PREDICTABLE for name_assign_type
  Bluetooth: L2CAP: Fix u8 overflow
  igb: Initialize mailbox message for VF reset
  USB: serial: f81534: fix division by zero on line-speed change
  USB: serial: cp210x: add Kamstrup RF sniffer PIDs
  USB: serial: option: add Quectel EM05-G modem
  usb: gadget: uvc: Prevent buffer overflow in setup handler
  udf: Fix extending file within last block
  udf: Do not bother looking for prealloc extents if i_lenExtents matches i_size
  udf: Fix preallocation discarding at indirect extent boundary
  udf: Discard preallocation before extending file with a hole
  perf script python: Remove explicit shebang from tests/attr.c
  ASoC: ops: Correct bounds check for second channel on SX controls
  can: mcba_usb: Fix termination command argument
  can: sja1000: fix size of OCR_MODE_MASK define
  pinctrl: meditatek: Startup with the IRQs disabled
  ASoC: ops: Check bounds for second channel in snd_soc_put_volsw_sx()
  nfp: fix use-after-free in area_cache_get()
  block: unhash blkdev part inode when the part is deleted
  mm/khugepaged: invoke MMU notifiers in shmem/file collapse paths
  mm/khugepaged: fix GUP-fast interaction by sending IPI
  ANDROID: Add more hvc devices for virtio-console.

Conflicts:
	drivers/base/core.c
	drivers/edac/edac_device.c
	drivers/hwtracing/coresight/coresight-etm4x.c
	drivers/net/wireless/mac80211_hwsim.c
	drivers/scsi/ufs/ufshcd-crypto.c
	drivers/usb/gadget/function/f_fs.c
	drivers/usb/gadget/function/f_hid.c

Change-Id: Ied998db07e927ccb3376a78f044df36088d9e3b8
/*
|
|
* CPUFreq governor based on scheduler-provided CPU utilization data.
|
|
*
|
|
* Copyright (C) 2016, Intel Corporation
|
|
* Author: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
|
|
*
|
|
* This program is free software; you can redistribute it and/or modify
|
|
* it under the terms of the GNU General Public License version 2 as
|
|
* published by the Free Software Foundation.
|
|
*/
|
|
|
|
#define pr_fmt(fmt) KBUILD_MODNAME ": " fmt
|
|
|
|
#include "sched.h"
|
|
|
|
#include <linux/sched/cpufreq.h>
|
|
#include <trace/events/power.h>
|
|
#include <linux/sched/sysctl.h>
|
|
|
|
struct sugov_tunables {
|
|
struct gov_attr_set attr_set;
|
|
unsigned int up_rate_limit_us;
|
|
unsigned int down_rate_limit_us;
|
|
unsigned int hispeed_load;
|
|
unsigned int hispeed_freq;
|
|
unsigned int rtg_boost_freq;
|
|
bool pl;
|
|
};
|
|
|
|
struct sugov_policy {
|
|
struct cpufreq_policy *policy;
|
|
|
|
u64 last_ws;
|
|
u64 curr_cycles;
|
|
u64 last_cyc_update_time;
|
|
unsigned long avg_cap;
|
|
struct sugov_tunables *tunables;
|
|
struct list_head tunables_hook;
|
|
unsigned long hispeed_util;
|
|
unsigned long rtg_boost_util;
|
|
unsigned long max;
|
|
|
|
raw_spinlock_t update_lock; /* For shared policies */
|
|
u64 last_freq_update_time;
|
|
s64 min_rate_limit_ns;
|
|
s64 up_rate_delay_ns;
|
|
s64 down_rate_delay_ns;
|
|
unsigned int next_freq;
|
|
unsigned int cached_raw_freq;
|
|
unsigned int prev_cached_raw_freq;
|
|
|
|
/* The next fields are only needed if fast switch cannot be used: */
|
|
struct irq_work irq_work;
|
|
struct kthread_work work;
|
|
struct mutex work_lock;
|
|
struct kthread_worker worker;
|
|
struct task_struct *thread;
|
|
bool work_in_progress;
|
|
|
|
bool limits_changed;
|
|
bool need_freq_update;
|
|
};
|
|
|
|
struct sugov_cpu {
|
|
struct update_util_data update_util;
|
|
struct sugov_policy *sg_policy;
|
|
unsigned int cpu;
|
|
|
|
bool iowait_boost_pending;
|
|
unsigned int iowait_boost;
|
|
u64 last_update;
|
|
|
|
struct sched_walt_cpu_load walt_load;
|
|
|
|
unsigned long util;
|
|
unsigned int flags;
|
|
|
|
unsigned long bw_dl;
|
|
unsigned long min;
|
|
unsigned long max;
|
|
|
|
/* The field below is for single-CPU policies only: */
|
|
#ifdef CONFIG_NO_HZ_COMMON
|
|
unsigned long saved_idle_calls;
|
|
#endif
|
|
};
|
|
|
|
static DEFINE_PER_CPU(struct sugov_cpu, sugov_cpu);
|
|
static unsigned int stale_ns;
|
|
static DEFINE_PER_CPU(struct sugov_tunables *, cached_tunables);
|
|
|
|
/************************ Governor internals ***********************/
|
|
|
|
static bool sugov_should_update_freq(struct sugov_policy *sg_policy, u64 time)
|
|
{
|
|
s64 delta_ns;
|
|
|
|
/*
|
|
* Since cpufreq_update_util() is called with rq->lock held for
|
|
* the @target_cpu, our per-CPU data is fully serialized.
|
|
*
|
|
* However, drivers cannot in general deal with cross-CPU
|
|
* requests, so while get_next_freq() will work, our
|
|
* sugov_update_commit() call may not for the fast switching platforms.
|
|
*
|
|
* Hence stop here for remote requests if they aren't supported
|
|
* by the hardware, as calculating the frequency is pointless if
|
|
* we cannot in fact act on it.
|
|
*
|
|
* This is needed on the slow switching platforms too to prevent CPUs
|
|
* going offline from leaving stale IRQ work items behind.
|
|
*/
|
|
if (!cpufreq_this_cpu_can_update(sg_policy->policy))
|
|
return false;
|
|
|
|
if (unlikely(sg_policy->limits_changed)) {
|
|
sg_policy->limits_changed = false;
|
|
sg_policy->need_freq_update = true;
|
|
return true;
|
|
}
|
|
|
|
/* No need to recalculate next freq for min_rate_limit_us
|
|
* at least. However we might still decide to further rate
|
|
* limit once frequency change direction is decided, according
|
|
* to the separate rate limits.
|
|
*/
|
|
|
|
delta_ns = time - sg_policy->last_freq_update_time;
|
|
return delta_ns >= sg_policy->min_rate_limit_ns;
|
|
}
|
|
|
|
static inline bool use_pelt(void)
|
|
{
|
|
#ifdef CONFIG_SCHED_WALT
|
|
return false;
|
|
#else
|
|
return true;
|
|
#endif
|
|
}
|
|
|
|
static inline bool conservative_pl(void)
|
|
{
|
|
#ifdef CONFIG_SCHED_WALT
|
|
return sysctl_sched_conservative_pl;
|
|
#else
|
|
return false;
|
|
#endif
|
|
}
|
|
|
|
static bool sugov_up_down_rate_limit(struct sugov_policy *sg_policy, u64 time,
|
|
unsigned int next_freq)
|
|
{
|
|
s64 delta_ns;
|
|
|
|
delta_ns = time - sg_policy->last_freq_update_time;
|
|
|
|
if (next_freq > sg_policy->next_freq &&
|
|
delta_ns < sg_policy->up_rate_delay_ns)
|
|
return true;
|
|
|
|
if (next_freq < sg_policy->next_freq &&
|
|
delta_ns < sg_policy->down_rate_delay_ns)
|
|
return true;
|
|
|
|
return false;
|
|
}
|
|
|
|
static bool sugov_update_next_freq(struct sugov_policy *sg_policy, u64 time,
|
|
unsigned int next_freq)
|
|
{
|
|
if (sg_policy->next_freq == next_freq)
|
|
return false;
|
|
|
|
if (sugov_up_down_rate_limit(sg_policy, time, next_freq)) {
|
|
/* Restore cached freq as next_freq is not changed */
|
|
sg_policy->cached_raw_freq = sg_policy->prev_cached_raw_freq;
|
|
return false;
|
|
}
|
|
|
|
sg_policy->next_freq = next_freq;
|
|
sg_policy->last_freq_update_time = time;
|
|
|
|
return true;
|
|
}
|
|
|
|
static unsigned long freq_to_util(struct sugov_policy *sg_policy,
|
|
unsigned int freq)
|
|
{
|
|
return mult_frac(sg_policy->max, freq,
|
|
sg_policy->policy->cpuinfo.max_freq);
|
|
}
|
|
|
|
#define KHZ 1000
|
|
static void sugov_track_cycles(struct sugov_policy *sg_policy,
|
|
unsigned int prev_freq,
|
|
u64 upto)
|
|
{
|
|
u64 delta_ns, cycles;
|
|
u64 next_ws = sg_policy->last_ws + sched_ravg_window;
|
|
|
|
if (use_pelt())
|
|
return;
|
|
|
|
upto = min(upto, next_ws);
|
|
/* Track cycles in current window */
|
|
delta_ns = upto - sg_policy->last_cyc_update_time;
|
|
delta_ns *= prev_freq;
|
|
do_div(delta_ns, (NSEC_PER_SEC / KHZ));
|
|
cycles = delta_ns;
|
|
sg_policy->curr_cycles += cycles;
|
|
sg_policy->last_cyc_update_time = upto;
|
|
}
|
|
|
|
static void sugov_calc_avg_cap(struct sugov_policy *sg_policy, u64 curr_ws,
|
|
unsigned int prev_freq)
|
|
{
|
|
u64 last_ws = sg_policy->last_ws;
|
|
unsigned int avg_freq;
|
|
|
|
if (use_pelt())
|
|
return;
|
|
|
|
BUG_ON(curr_ws < last_ws);
|
|
if (curr_ws <= last_ws)
|
|
return;
|
|
|
|
/* If we skipped some windows */
|
|
if (curr_ws > (last_ws + sched_ravg_window)) {
|
|
avg_freq = prev_freq;
|
|
/* Reset tracking history */
|
|
sg_policy->last_cyc_update_time = curr_ws;
|
|
} else {
|
|
sugov_track_cycles(sg_policy, prev_freq, curr_ws);
|
|
avg_freq = sg_policy->curr_cycles;
|
|
avg_freq /= sched_ravg_window / (NSEC_PER_SEC / KHZ);
|
|
}
|
|
sg_policy->avg_cap = freq_to_util(sg_policy, avg_freq);
|
|
sg_policy->curr_cycles = 0;
|
|
sg_policy->last_ws = curr_ws;
|
|
}
|
|
|
|
static void sugov_fast_switch(struct sugov_policy *sg_policy, u64 time,
|
|
unsigned int next_freq)
|
|
{
|
|
struct cpufreq_policy *policy = sg_policy->policy;
|
|
unsigned int cpu;
|
|
|
|
if (!sugov_update_next_freq(sg_policy, time, next_freq))
|
|
return;
|
|
|
|
sugov_track_cycles(sg_policy, sg_policy->policy->cur, time);
|
|
next_freq = cpufreq_driver_fast_switch(policy, next_freq);
|
|
if (!next_freq)
|
|
return;
|
|
|
|
policy->cur = next_freq;
|
|
|
|
if (trace_cpu_frequency_enabled()) {
|
|
for_each_cpu(cpu, policy->cpus)
|
|
trace_cpu_frequency(next_freq, cpu);
|
|
}
|
|
}
|
|
|
|
static void sugov_deferred_update(struct sugov_policy *sg_policy, u64 time,
|
|
unsigned int next_freq)
|
|
{
|
|
if (!sugov_update_next_freq(sg_policy, time, next_freq))
|
|
return;
|
|
|
|
if (use_pelt())
|
|
sg_policy->work_in_progress = true;
|
|
irq_work_queue(&sg_policy->irq_work);
|
|
}
|
|
|
|
#define TARGET_LOAD 80
|
|
/**
|
|
* get_next_freq - Compute a new frequency for a given cpufreq policy.
|
|
* @sg_policy: schedutil policy object to compute the new frequency for.
|
|
* @util: Current CPU utilization.
|
|
* @max: CPU capacity.
|
|
*
|
|
* If the utilization is frequency-invariant, choose the new frequency to be
|
|
* proportional to it, that is
|
|
*
|
|
* next_freq = C * max_freq * util / max
|
|
*
|
|
* Otherwise, approximate the would-be frequency-invariant utilization by
|
|
* util_raw * (curr_freq / max_freq) which leads to
|
|
*
|
|
* next_freq = C * curr_freq * util_raw / max
|
|
*
|
|
* Take C = 1.25 for the frequency tipping point at (util / max) = 0.8.
|
|
*
|
|
* The lowest driver-supported frequency which is equal or greater than the raw
|
|
* next_freq (as calculated above) is returned, subject to policy min/max and
|
|
* cpufreq driver limitations.
|
|
*/
|
|
static unsigned int get_next_freq(struct sugov_policy *sg_policy,
|
|
unsigned long util, unsigned long max)
|
|
{
|
|
struct cpufreq_policy *policy = sg_policy->policy;
|
|
unsigned int freq = arch_scale_freq_invariant() ?
|
|
policy->cpuinfo.max_freq : policy->cur;
|
|
|
|
freq = map_util_freq(util, freq, max);
|
|
trace_sugov_next_freq(policy->cpu, util, max, freq);
|
|
|
|
if (freq == sg_policy->cached_raw_freq && !sg_policy->need_freq_update)
|
|
return sg_policy->next_freq;
|
|
|
|
sg_policy->need_freq_update = false;
|
|
sg_policy->prev_cached_raw_freq = sg_policy->cached_raw_freq;
|
|
sg_policy->cached_raw_freq = freq;
|
|
return cpufreq_driver_resolve_freq(policy, freq);
|
|
}
|
|
|
|
extern long
|
|
schedtune_cpu_margin_with(unsigned long util, int cpu, struct task_struct *p);
|
|
|
|
/*
|
|
* This function computes an effective utilization for the given CPU, to be
|
|
* used for frequency selection given the linear relation: f = u * f_max.
|
|
*
|
|
* The scheduler tracks the following metrics:
|
|
*
|
|
* cpu_util_{cfs,rt,dl,irq}()
|
|
* cpu_bw_dl()
|
|
*
|
|
* Where the cfs,rt and dl util numbers are tracked with the same metric and
|
|
* synchronized windows and are thus directly comparable.
|
|
*
|
|
* The @util parameter passed to this function is assumed to be the aggregation
|
|
* of RT and CFS util numbers. The cases of DL and IRQ are managed here.
|
|
*
|
|
* The cfs,rt,dl utilization are the running times measured with rq->clock_task
|
|
* which excludes things like IRQ and steal-time. These latter are then accrued
|
|
* in the irq utilization.
|
|
*
|
|
* The DL bandwidth number otoh is not a measured metric but a value computed
|
|
* based on the task model parameters and gives the minimal utilization
|
|
* required to meet deadlines.
|
|
*/
|
|
unsigned long schedutil_cpu_util(int cpu, unsigned long util_cfs,
|
|
unsigned long max, enum schedutil_type type,
|
|
struct task_struct *p)
|
|
{
|
|
unsigned long dl_util, util, irq;
|
|
struct rq *rq = cpu_rq(cpu);
|
|
|
|
if (!uclamp_is_used() &&
|
|
type == FREQUENCY_UTIL && rt_rq_is_runnable(&rq->rt)) {
|
|
return max;
|
|
}
|
|
|
|
/*
|
|
* Early check to see if IRQ/steal time saturates the CPU, can be
|
|
* because of inaccuracies in how we track these -- see
|
|
* update_irq_load_avg().
|
|
*/
|
|
irq = cpu_util_irq(rq);
|
|
if (unlikely(irq >= max))
|
|
return max;
|
|
|
|
/*
|
|
* Because the time spend on RT/DL tasks is visible as 'lost' time to
|
|
* CFS tasks and we use the same metric to track the effective
|
|
* utilization (PELT windows are synchronized) we can directly add them
|
|
* to obtain the CPU's actual utilization.
|
|
*
|
|
* CFS and RT utilization can be boosted or capped, depending on
|
|
* utilization clamp constraints requested by currently RUNNABLE
|
|
* tasks.
|
|
* When there are no CFS RUNNABLE tasks, clamps are released and
|
|
* frequency will be gracefully reduced with the utilization decay.
|
|
*/
|
|
util = util_cfs + cpu_util_rt(rq);
|
|
if (type == FREQUENCY_UTIL)
|
|
#ifdef CONFIG_SCHED_TUNE
|
|
util += schedtune_cpu_margin_with(util, cpu, p);
|
|
#else
|
|
util = uclamp_rq_util_with(rq, util, p);
|
|
#endif
|
|
|
|
dl_util = cpu_util_dl(rq);
|
|
|
|
/*
|
|
* For frequency selection we do not make cpu_util_dl() a permanent part
|
|
* of this sum because we want to use cpu_bw_dl() later on, but we need
|
|
* to check if the CFS+RT+DL sum is saturated (ie. no idle time) such
|
|
* that we select f_max when there is no idle time.
|
|
*
|
|
* NOTE: numerical errors or stop class might cause us to not quite hit
|
|
* saturation when we should -- something for later.
|
|
*/
|
|
if (util + dl_util >= max)
|
|
return max;
|
|
|
|
/*
|
|
* OTOH, for energy computation we need the estimated running time, so
|
|
* include util_dl and ignore dl_bw.
|
|
*/
|
|
if (type == ENERGY_UTIL)
|
|
util += dl_util;
|
|
|
|
/*
|
|
* There is still idle time; further improve the number by using the
|
|
* irq metric. Because IRQ/steal time is hidden from the task clock we
|
|
* need to scale the task numbers:
|
|
*
|
|
* 1 - irq
|
|
* U' = irq + ------- * U
|
|
* max
|
|
*/
|
|
util = scale_irq_capacity(util, irq, max);
|
|
util += irq;
|
|
|
|
/*
|
|
* Bandwidth required by DEADLINE must always be granted while, for
|
|
* FAIR and RT, we use blocked utilization of IDLE CPUs as a mechanism
|
|
* to gracefully reduce the frequency when no tasks show up for longer
|
|
* periods of time.
|
|
*
|
|
* Ideally we would like to set bw_dl as min/guaranteed freq and util +
|
|
* bw_dl as requested freq. However, cpufreq is not yet ready for such
|
|
* an interface. So, we only do the latter for now.
|
|
*/
|
|
if (type == FREQUENCY_UTIL)
|
|
util += cpu_bw_dl(rq);
|
|
|
|
return min(max, util);
|
|
}
|
|
|
|
#ifdef CONFIG_SCHED_WALT
|
|
static unsigned long sugov_get_util(struct sugov_cpu *sg_cpu)
|
|
{
|
|
struct rq *rq = cpu_rq(sg_cpu->cpu);
|
|
unsigned long max = arch_scale_cpu_capacity(NULL, sg_cpu->cpu);
|
|
|
|
sg_cpu->max = max;
|
|
sg_cpu->bw_dl = cpu_bw_dl(rq);
|
|
|
|
return stune_util(sg_cpu->cpu, 0, &sg_cpu->walt_load);
|
|
}
|
|
#else
|
|
static unsigned long sugov_get_util(struct sugov_cpu *sg_cpu)
|
|
{
|
|
struct rq *rq = cpu_rq(sg_cpu->cpu);
|
|
|
|
unsigned long util_cfs = cpu_util_cfs(rq);
|
|
unsigned long max = arch_scale_cpu_capacity(NULL, sg_cpu->cpu);
|
|
|
|
sg_cpu->max = max;
|
|
sg_cpu->bw_dl = cpu_bw_dl(rq);
|
|
|
|
return schedutil_cpu_util(sg_cpu->cpu, util_cfs, max,
|
|
FREQUENCY_UTIL, NULL);
|
|
}
|
|
#endif
|
|
|
|
/**
|
|
* sugov_iowait_reset() - Reset the IO boost status of a CPU.
|
|
* @sg_cpu: the sugov data for the CPU to boost
|
|
* @time: the update time from the caller
|
|
* @set_iowait_boost: true if an IO boost has been requested
|
|
*
|
|
* The IO wait boost of a task is disabled after a tick since the last update
|
|
* of a CPU. If a new IO wait boost is requested after more then a tick, then
|
|
* we enable the boost starting from the minimum frequency, which improves
|
|
* energy efficiency by ignoring sporadic wakeups from IO.
|
|
*/
|
|
static bool sugov_iowait_reset(struct sugov_cpu *sg_cpu, u64 time,
|
|
bool set_iowait_boost)
|
|
{
|
|
s64 delta_ns = time - sg_cpu->last_update;
|
|
|
|
/* Reset boost only if a tick has elapsed since last request */
|
|
if (delta_ns <= TICK_NSEC)
|
|
return false;
|
|
|
|
sg_cpu->iowait_boost = set_iowait_boost ? sg_cpu->min : 0;
|
|
sg_cpu->iowait_boost_pending = set_iowait_boost;
|
|
|
|
return true;
|
|
}
|
|
|
|
/**
|
|
* sugov_iowait_boost() - Updates the IO boost status of a CPU.
|
|
* @sg_cpu: the sugov data for the CPU to boost
|
|
* @time: the update time from the caller
|
|
* @flags: SCHED_CPUFREQ_IOWAIT if the task is waking up after an IO wait
|
|
*
|
|
* Each time a task wakes up after an IO operation, the CPU utilization can be
|
|
* boosted to a certain utilization which doubles at each "frequent and
|
|
* successive" wakeup from IO, ranging from the utilization of the minimum
|
|
* OPP to the utilization of the maximum OPP.
|
|
* To keep doubling, an IO boost has to be requested at least once per tick,
|
|
* otherwise we restart from the utilization of the minimum OPP.
|
|
*/
|
|
static void sugov_iowait_boost(struct sugov_cpu *sg_cpu, u64 time,
|
|
unsigned int flags)
|
|
{
|
|
bool set_iowait_boost = flags & SCHED_CPUFREQ_IOWAIT;
|
|
|
|
/* Reset boost if the CPU appears to have been idle enough */
|
|
if (sg_cpu->iowait_boost &&
|
|
sugov_iowait_reset(sg_cpu, time, set_iowait_boost))
|
|
return;
|
|
|
|
/* Boost only tasks waking up after IO */
|
|
if (!set_iowait_boost)
|
|
return;
|
|
|
|
/* Ensure boost doubles only one time at each request */
|
|
if (sg_cpu->iowait_boost_pending)
|
|
return;
|
|
sg_cpu->iowait_boost_pending = true;
|
|
|
|
/* Double the boost at each request */
|
|
if (sg_cpu->iowait_boost) {
|
|
sg_cpu->iowait_boost =
|
|
min_t(unsigned int, sg_cpu->iowait_boost << 1, SCHED_CAPACITY_SCALE);
|
|
return;
|
|
}
|
|
|
|
/* First wakeup after IO: start with minimum boost */
|
|
sg_cpu->iowait_boost = sg_cpu->min;
|
|
}
|
|
|
|
/**
|
|
* sugov_iowait_apply() - Apply the IO boost to a CPU.
|
|
* @sg_cpu: the sugov data for the cpu to boost
|
|
* @time: the update time from the caller
|
|
* @util: the utilization to (eventually) boost
|
|
* @max: the maximum value the utilization can be boosted to
|
|
*
|
|
* A CPU running a task which woken up after an IO operation can have its
|
|
* utilization boosted to speed up the completion of those IO operations.
|
|
* The IO boost value is increased each time a task wakes up from IO, in
|
|
* sugov_iowait_apply(), and it's instead decreased by this function,
|
|
* each time an increase has not been requested (!iowait_boost_pending).
|
|
*
|
|
* A CPU which also appears to have been idle for at least one tick has also
|
|
* its IO boost utilization reset.
|
|
*
|
|
* This mechanism is designed to boost high frequently IO waiting tasks, while
|
|
* being more conservative on tasks which does sporadic IO operations.
|
|
*/
|
|
static unsigned long sugov_iowait_apply(struct sugov_cpu *sg_cpu, u64 time,
|
|
unsigned long util, unsigned long max)
|
|
{
|
|
unsigned long boost;
|
|
|
|
/* No boost currently required */
|
|
if (!sg_cpu->iowait_boost)
|
|
return util;
|
|
|
|
/* Reset boost if the CPU appears to have been idle enough */
|
|
if (sugov_iowait_reset(sg_cpu, time, false))
|
|
return util;
|
|
|
|
if (!sg_cpu->iowait_boost_pending) {
|
|
/*
|
|
* No boost pending; reduce the boost value.
|
|
*/
|
|
sg_cpu->iowait_boost >>= 1;
|
|
if (sg_cpu->iowait_boost < sg_cpu->min) {
|
|
sg_cpu->iowait_boost = 0;
|
|
return util;
|
|
}
|
|
}
|
|
|
|
sg_cpu->iowait_boost_pending = false;
|
|
|
|
/*
|
|
* @util is already in capacity scale; convert iowait_boost
|
|
* into the same scale so we can compare.
|
|
*/
|
|
boost = (sg_cpu->iowait_boost * max) >> SCHED_CAPACITY_SHIFT;
|
|
return max(boost, util);
|
|
}
|
|
|
|
#ifdef CONFIG_NO_HZ_COMMON
|
|
static bool sugov_cpu_is_busy(struct sugov_cpu *sg_cpu)
|
|
{
|
|
unsigned long idle_calls = tick_nohz_get_idle_calls_cpu(sg_cpu->cpu);
|
|
bool ret = idle_calls == sg_cpu->saved_idle_calls;
|
|
|
|
sg_cpu->saved_idle_calls = idle_calls;
|
|
return ret;
|
|
}
|
|
#else
|
|
static inline bool sugov_cpu_is_busy(struct sugov_cpu *sg_cpu) { return false; }
|
|
#endif /* CONFIG_NO_HZ_COMMON */
|
|
|
|
#define NL_RATIO 75
|
|
#define DEFAULT_HISPEED_LOAD 90
|
|
#define DEFAULT_CPU0_RTG_BOOST_FREQ 1000000
|
|
#define DEFAULT_CPU4_RTG_BOOST_FREQ 0
|
|
#define DEFAULT_CPU7_RTG_BOOST_FREQ 0
|
|
static void sugov_walt_adjust(struct sugov_cpu *sg_cpu, unsigned long *util,
|
|
unsigned long *max)
|
|
{
|
|
struct sugov_policy *sg_policy = sg_cpu->sg_policy;
|
|
bool is_migration = sg_cpu->flags & SCHED_CPUFREQ_INTERCLUSTER_MIG;
|
|
bool is_rtg_boost = sg_cpu->walt_load.rtgb_active;
|
|
unsigned long nl = sg_cpu->walt_load.nl;
|
|
unsigned long cpu_util = sg_cpu->util;
|
|
bool is_hiload;
|
|
unsigned long pl = sg_cpu->walt_load.pl;
|
|
|
|
if (use_pelt())
|
|
return;
|
|
|
|
if (is_rtg_boost)
|
|
*util = max(*util, sg_policy->rtg_boost_util);
|
|
|
|
is_hiload = (cpu_util >= mult_frac(sg_policy->avg_cap,
|
|
sg_policy->tunables->hispeed_load,
|
|
100));
|
|
|
|
if (is_hiload && !is_migration)
|
|
*util = max(*util, sg_policy->hispeed_util);
|
|
|
|
if (is_hiload && nl >= mult_frac(cpu_util, NL_RATIO, 100))
|
|
*util = *max;
|
|
|
|
if (sg_policy->tunables->pl) {
|
|
if (conservative_pl())
|
|
pl = mult_frac(pl, TARGET_LOAD, 100);
|
|
*util = max(*util, pl);
|
|
}
|
|
}
|
|
|
|
/*
|
|
* Make sugov_should_update_freq() ignore the rate limit when DL
|
|
* has increased the utilization.
|
|
*/
|
|
static inline void ignore_dl_rate_limit(struct sugov_cpu *sg_cpu, struct sugov_policy *sg_policy)
|
|
{
|
|
if (cpu_bw_dl(cpu_rq(sg_cpu->cpu)) > sg_cpu->bw_dl)
|
|
sg_policy->limits_changed = true;
|
|
}
|
|
|
|
static inline unsigned long target_util(struct sugov_policy *sg_policy,
|
|
unsigned int freq)
|
|
{
|
|
unsigned long util;
|
|
|
|
util = freq_to_util(sg_policy, freq);
|
|
util = mult_frac(util, TARGET_LOAD, 100);
|
|
return util;
|
|
}
|
|
|
|
static void sugov_update_single(struct update_util_data *hook, u64 time,
|
|
unsigned int flags)
|
|
{
|
|
struct sugov_cpu *sg_cpu = container_of(hook, struct sugov_cpu, update_util);
|
|
struct sugov_policy *sg_policy = sg_cpu->sg_policy;
|
|
unsigned long util, max, hs_util, boost_util;
|
|
unsigned int next_f;
|
|
bool busy;
|
|
|
|
if (!sg_policy->tunables->pl && flags & SCHED_CPUFREQ_PL)
|
|
return;
|
|
|
|
sugov_iowait_boost(sg_cpu, time, flags);
|
|
sg_cpu->last_update = time;
|
|
|
|
ignore_dl_rate_limit(sg_cpu, sg_policy);
|
|
|
|
if (!sugov_should_update_freq(sg_policy, time))
|
|
return;
|
|
|
|
/* Limits may have changed, don't skip frequency update */
|
|
busy = use_pelt() && !sg_policy->need_freq_update &&
|
|
sugov_cpu_is_busy(sg_cpu);
|
|
|
|
sg_cpu->util = util = sugov_get_util(sg_cpu);
|
|
max = sg_cpu->max;
|
|
sg_cpu->flags = flags;
|
|
|
|
if (sg_policy->max != max) {
|
|
sg_policy->max = max;
|
|
hs_util = target_util(sg_policy,
|
|
sg_policy->tunables->hispeed_freq);
|
|
sg_policy->hispeed_util = hs_util;
|
|
|
|
boost_util = target_util(sg_policy,
|
|
sg_policy->tunables->rtg_boost_freq);
|
|
sg_policy->rtg_boost_util = boost_util;
|
|
}
|
|
|
|
util = sugov_iowait_apply(sg_cpu, time, util, max);
|
|
sugov_calc_avg_cap(sg_policy, sg_cpu->walt_load.ws,
|
|
sg_policy->policy->cur);
|
|
|
|
trace_sugov_util_update(sg_cpu->cpu, sg_cpu->util,
|
|
sg_policy->avg_cap, max, sg_cpu->walt_load.nl,
|
|
sg_cpu->walt_load.pl,
|
|
sg_cpu->walt_load.rtgb_active, flags);
|
|
|
|
sugov_walt_adjust(sg_cpu, &util, &max);
|
|
next_f = get_next_freq(sg_policy, util, max);
|
|
/*
|
|
* Do not reduce the frequency if the CPU has not been idle
|
|
* recently, as the reduction is likely to be premature then.
|
|
*/
|
|
if (busy && next_f < sg_policy->next_freq) {
|
|
next_f = sg_policy->next_freq;
|
|
|
|
/* Restore cached freq as next_freq has changed */
|
|
sg_policy->cached_raw_freq = sg_policy->prev_cached_raw_freq;
|
|
}
|
|
|
|
/*
|
|
* This code runs under rq->lock for the target CPU, so it won't run
|
|
* concurrently on two different CPUs for the same target and it is not
|
|
* necessary to acquire the lock in the fast switch case.
|
|
*/
|
|
if (sg_policy->policy->fast_switch_enabled) {
|
|
sugov_fast_switch(sg_policy, time, next_f);
|
|
} else {
|
|
raw_spin_lock(&sg_policy->update_lock);
|
|
sugov_deferred_update(sg_policy, time, next_f);
|
|
raw_spin_unlock(&sg_policy->update_lock);
|
|
}
|
|
}
|
|
|
|
static unsigned int sugov_next_freq_shared(struct sugov_cpu *sg_cpu, u64 time)
|
|
{
|
|
struct sugov_policy *sg_policy = sg_cpu->sg_policy;
|
|
struct cpufreq_policy *policy = sg_policy->policy;
|
|
u64 last_freq_update_time = sg_policy->last_freq_update_time;
|
|
unsigned long util = 0, max = 1;
|
|
unsigned int j;
|
|
|
|
for_each_cpu(j, policy->cpus) {
|
|
struct sugov_cpu *j_sg_cpu = &per_cpu(sugov_cpu, j);
|
|
unsigned long j_util, j_max;
|
|
s64 delta_ns;
|
|
|
|
/*
|
|
* If the CPU utilization was last updated before the previous
|
|
* frequency update and the time elapsed between the last update
|
|
* of the CPU utilization and the last frequency update is long
|
|
* enough, don't take the CPU into account as it probably is
|
|
* idle now (and clear iowait_boost for it).
|
|
*/
|
|
delta_ns = last_freq_update_time - j_sg_cpu->last_update;
|
|
if (delta_ns > stale_ns) {
|
|
sugov_iowait_reset(j_sg_cpu, last_freq_update_time,
|
|
false);
|
|
continue;
|
|
}
|
|
|
|
/*
|
|
* If the util value for all CPUs in a policy is 0, just using >
|
|
* will result in a max value of 1. WALT stats can later update
|
|
* the aggregated util value, causing get_next_freq() to compute
|
|
* freq = max_freq * 1.25 * (util / max) for nonzero util,
|
|
* leading to spurious jumps to fmax.
|
|
*/
|
|
j_util = j_sg_cpu->util;
|
|
j_max = j_sg_cpu->max;
|
|
j_util = sugov_iowait_apply(j_sg_cpu, time, j_util, j_max);
|
|
|
|
if (j_util * max >= j_max * util) {
|
|
util = j_util;
|
|
max = j_max;
|
|
}
|
|
|
|
sugov_walt_adjust(j_sg_cpu, &util, &max);
|
|
}
|
|
|
|
return get_next_freq(sg_policy, util, max);
|
|
}
|
|
|
|
static void
|
|
sugov_update_shared(struct update_util_data *hook, u64 time, unsigned int flags)
|
|
{
|
|
struct sugov_cpu *sg_cpu = container_of(hook, struct sugov_cpu, update_util);
|
|
struct sugov_policy *sg_policy = sg_cpu->sg_policy;
|
|
unsigned long hs_util, boost_util;
|
|
unsigned int next_f;
|
|
|
|
if (!sg_policy->tunables->pl && flags & SCHED_CPUFREQ_PL)
|
|
return;
|
|
|
|
sg_cpu->util = sugov_get_util(sg_cpu);
|
|
sg_cpu->flags = flags;
|
|
raw_spin_lock(&sg_policy->update_lock);
|
|
|
|
if (sg_policy->max != sg_cpu->max) {
|
|
sg_policy->max = sg_cpu->max;
|
|
hs_util = target_util(sg_policy,
|
|
sg_policy->tunables->hispeed_freq);
|
|
sg_policy->hispeed_util = hs_util;
|
|
|
|
boost_util = target_util(sg_policy,
|
|
sg_policy->tunables->rtg_boost_freq);
|
|
sg_policy->rtg_boost_util = boost_util;
|
|
}
|
|
|
|
sugov_iowait_boost(sg_cpu, time, flags);
|
|
sg_cpu->last_update = time;
|
|
|
|
sugov_calc_avg_cap(sg_policy, sg_cpu->walt_load.ws,
|
|
sg_policy->policy->cur);
|
|
ignore_dl_rate_limit(sg_cpu, sg_policy);
|
|
|
|
trace_sugov_util_update(sg_cpu->cpu, sg_cpu->util, sg_policy->avg_cap,
|
|
sg_cpu->max, sg_cpu->walt_load.nl,
|
|
sg_cpu->walt_load.pl,
|
|
sg_cpu->walt_load.rtgb_active, flags);
|
|
|
|
if (sugov_should_update_freq(sg_policy, time) &&
|
|
!(flags & SCHED_CPUFREQ_CONTINUE)) {
|
|
next_f = sugov_next_freq_shared(sg_cpu, time);
|
|
|
|
if (sg_policy->policy->fast_switch_enabled)
|
|
sugov_fast_switch(sg_policy, time, next_f);
|
|
else
|
|
sugov_deferred_update(sg_policy, time, next_f);
|
|
}
|
|
|
|
raw_spin_unlock(&sg_policy->update_lock);
|
|
}
|
|
|
|
static void sugov_work(struct kthread_work *work)
|
|
{
|
|
struct sugov_policy *sg_policy = container_of(work, struct sugov_policy, work);
|
|
unsigned int freq;
|
|
unsigned long flags;
|
|
|
|
/*
|
|
* Hold sg_policy->update_lock shortly to handle the case where:
|
|
* incase sg_policy->next_freq is read here, and then updated by
|
|
* sugov_deferred_update() just before work_in_progress is set to false
|
|
* here, we may miss queueing the new update.
|
|
*
|
|
* Note: If a work was queued after the update_lock is released,
|
|
* sugov_work() will just be called again by kthread_work code; and the
|
|
* request will be proceed before the sugov thread sleeps.
|
|
*/
|
|
raw_spin_lock_irqsave(&sg_policy->update_lock, flags);
|
|
freq = sg_policy->next_freq;
|
|
if (use_pelt())
|
|
sg_policy->work_in_progress = false;
|
|
sugov_track_cycles(sg_policy, sg_policy->policy->cur,
|
|
ktime_get_ns());
|
|
raw_spin_unlock_irqrestore(&sg_policy->update_lock, flags);
|
|
|
|
mutex_lock(&sg_policy->work_lock);
|
|
__cpufreq_driver_target(sg_policy->policy, freq, CPUFREQ_RELATION_L);
|
|
mutex_unlock(&sg_policy->work_lock);
|
|
}
|
|
|
|
static void sugov_irq_work(struct irq_work *irq_work)
|
|
{
|
|
struct sugov_policy *sg_policy;
|
|
|
|
sg_policy = container_of(irq_work, struct sugov_policy, irq_work);
|
|
|
|
kthread_queue_work(&sg_policy->worker, &sg_policy->work);
|
|
}
|
|
|
|
/************************** sysfs interface ************************/
|
|
|
|
static struct sugov_tunables *global_tunables;
|
|
static DEFINE_MUTEX(global_tunables_lock);
|
|
|
|
static inline struct sugov_tunables *to_sugov_tunables(struct gov_attr_set *attr_set)
|
|
{
|
|
return container_of(attr_set, struct sugov_tunables, attr_set);
|
|
}
|
|
|
|
static DEFINE_MUTEX(min_rate_lock);
|
|
|
|
static void update_min_rate_limit_ns(struct sugov_policy *sg_policy)
|
|
{
|
|
mutex_lock(&min_rate_lock);
|
|
sg_policy->min_rate_limit_ns = min(sg_policy->up_rate_delay_ns,
|
|
sg_policy->down_rate_delay_ns);
|
|
mutex_unlock(&min_rate_lock);
|
|
}
|
|
|
|
static ssize_t up_rate_limit_us_show(struct gov_attr_set *attr_set, char *buf)
{
	struct sugov_tunables *tunables = to_sugov_tunables(attr_set);

	return scnprintf(buf, PAGE_SIZE, "%u\n", tunables->up_rate_limit_us);
}

static ssize_t down_rate_limit_us_show(struct gov_attr_set *attr_set, char *buf)
{
	struct sugov_tunables *tunables = to_sugov_tunables(attr_set);

	return scnprintf(buf, PAGE_SIZE, "%u\n", tunables->down_rate_limit_us);
}

static ssize_t up_rate_limit_us_store(struct gov_attr_set *attr_set,
				      const char *buf, size_t count)
{
	struct sugov_tunables *tunables = to_sugov_tunables(attr_set);
	struct sugov_policy *sg_policy;
	unsigned int rate_limit_us;

	if (kstrtouint(buf, 10, &rate_limit_us))
		return -EINVAL;

	tunables->up_rate_limit_us = rate_limit_us;

	list_for_each_entry(sg_policy, &attr_set->policy_list, tunables_hook) {
		sg_policy->up_rate_delay_ns = rate_limit_us * NSEC_PER_USEC;
		update_min_rate_limit_ns(sg_policy);
	}

	return count;
}

static ssize_t down_rate_limit_us_store(struct gov_attr_set *attr_set,
					const char *buf, size_t count)
{
	struct sugov_tunables *tunables = to_sugov_tunables(attr_set);
	struct sugov_policy *sg_policy;
	unsigned int rate_limit_us;

	if (kstrtouint(buf, 10, &rate_limit_us))
		return -EINVAL;

	tunables->down_rate_limit_us = rate_limit_us;

	list_for_each_entry(sg_policy, &attr_set->policy_list, tunables_hook) {
		sg_policy->down_rate_delay_ns = rate_limit_us * NSEC_PER_USEC;
		update_min_rate_limit_ns(sg_policy);
	}

	return count;
}

static struct governor_attr up_rate_limit_us = __ATTR_RW(up_rate_limit_us);
static struct governor_attr down_rate_limit_us = __ATTR_RW(down_rate_limit_us);

static ssize_t hispeed_load_show(struct gov_attr_set *attr_set, char *buf)
{
	struct sugov_tunables *tunables = to_sugov_tunables(attr_set);

	return scnprintf(buf, PAGE_SIZE, "%u\n", tunables->hispeed_load);
}

static ssize_t hispeed_load_store(struct gov_attr_set *attr_set,
				  const char *buf, size_t count)
{
	struct sugov_tunables *tunables = to_sugov_tunables(attr_set);

	if (kstrtouint(buf, 10, &tunables->hispeed_load))
		return -EINVAL;

	tunables->hispeed_load = min(100U, tunables->hispeed_load);

	return count;
}

static ssize_t hispeed_freq_show(struct gov_attr_set *attr_set, char *buf)
{
	struct sugov_tunables *tunables = to_sugov_tunables(attr_set);

	return scnprintf(buf, PAGE_SIZE, "%u\n", tunables->hispeed_freq);
}

static ssize_t hispeed_freq_store(struct gov_attr_set *attr_set,
				  const char *buf, size_t count)
{
	struct sugov_tunables *tunables = to_sugov_tunables(attr_set);
	unsigned int val;
	struct sugov_policy *sg_policy;
	unsigned long hs_util;
	unsigned long flags;

	if (kstrtouint(buf, 10, &val))
		return -EINVAL;

	tunables->hispeed_freq = val;
	list_for_each_entry(sg_policy, &attr_set->policy_list, tunables_hook) {
		raw_spin_lock_irqsave(&sg_policy->update_lock, flags);
		hs_util = target_util(sg_policy,
				      sg_policy->tunables->hispeed_freq);
		sg_policy->hispeed_util = hs_util;
		raw_spin_unlock_irqrestore(&sg_policy->update_lock, flags);
	}

	return count;
}

static ssize_t rtg_boost_freq_show(struct gov_attr_set *attr_set, char *buf)
{
	struct sugov_tunables *tunables = to_sugov_tunables(attr_set);

	return scnprintf(buf, PAGE_SIZE, "%u\n", tunables->rtg_boost_freq);
}

static ssize_t rtg_boost_freq_store(struct gov_attr_set *attr_set,
				    const char *buf, size_t count)
{
	struct sugov_tunables *tunables = to_sugov_tunables(attr_set);
	unsigned int val;
	struct sugov_policy *sg_policy;
	unsigned long boost_util;
	unsigned long flags;

	if (kstrtouint(buf, 10, &val))
		return -EINVAL;

	tunables->rtg_boost_freq = val;
	list_for_each_entry(sg_policy, &attr_set->policy_list, tunables_hook) {
		raw_spin_lock_irqsave(&sg_policy->update_lock, flags);
		boost_util = target_util(sg_policy,
					 sg_policy->tunables->rtg_boost_freq);
		sg_policy->rtg_boost_util = boost_util;
		raw_spin_unlock_irqrestore(&sg_policy->update_lock, flags);
	}

	return count;
}

static ssize_t pl_show(struct gov_attr_set *attr_set, char *buf)
{
	struct sugov_tunables *tunables = to_sugov_tunables(attr_set);

	return scnprintf(buf, PAGE_SIZE, "%u\n", tunables->pl);
}

static ssize_t pl_store(struct gov_attr_set *attr_set, const char *buf,
			size_t count)
{
	struct sugov_tunables *tunables = to_sugov_tunables(attr_set);

	if (kstrtobool(buf, &tunables->pl))
		return -EINVAL;

	return count;
}

static struct governor_attr hispeed_load = __ATTR_RW(hispeed_load);
static struct governor_attr hispeed_freq = __ATTR_RW(hispeed_freq);
static struct governor_attr rtg_boost_freq = __ATTR_RW(rtg_boost_freq);
static struct governor_attr pl = __ATTR_RW(pl);

static struct attribute *sugov_attributes[] = {
	&up_rate_limit_us.attr,
	&down_rate_limit_us.attr,
	&hispeed_load.attr,
	&hispeed_freq.attr,
	&rtg_boost_freq.attr,
	&pl.attr,
	NULL
};

static void sugov_tunables_free(struct kobject *kobj)
{
	struct gov_attr_set *attr_set = container_of(kobj, struct gov_attr_set, kobj);

	kfree(to_sugov_tunables(attr_set));
}

static struct kobj_type sugov_tunables_ktype = {
	.default_attrs = sugov_attributes,
	.sysfs_ops = &governor_sysfs_ops,
	.release = &sugov_tunables_free,
};

/********************** cpufreq governor interface *********************/

static struct cpufreq_governor schedutil_gov;

static struct sugov_policy *sugov_policy_alloc(struct cpufreq_policy *policy)
{
	struct sugov_policy *sg_policy;

	sg_policy = kzalloc(sizeof(*sg_policy), GFP_KERNEL);
	if (!sg_policy)
		return NULL;

	sg_policy->policy = policy;
	raw_spin_lock_init(&sg_policy->update_lock);
	return sg_policy;
}

static void sugov_policy_free(struct sugov_policy *sg_policy)
{
	kfree(sg_policy);
}

static int sugov_kthread_create(struct sugov_policy *sg_policy)
{
	struct task_struct *thread;
	struct sched_param param = { .sched_priority = MAX_USER_RT_PRIO / 2 };
	struct cpufreq_policy *policy = sg_policy->policy;
	int ret;

	/* kthread only required for slow path */
	if (policy->fast_switch_enabled)
		return 0;

	kthread_init_work(&sg_policy->work, sugov_work);
	kthread_init_worker(&sg_policy->worker);
	thread = kthread_create(kthread_worker_fn, &sg_policy->worker,
				"sugov:%d",
				cpumask_first(policy->related_cpus));
	if (IS_ERR(thread)) {
		pr_err("failed to create sugov thread: %ld\n", PTR_ERR(thread));
		return PTR_ERR(thread);
	}

	ret = sched_setscheduler_nocheck(thread, SCHED_FIFO, &param);
	if (ret) {
		kthread_stop(thread);
		pr_warn("%s: failed to set SCHED_FIFO\n", __func__);
		return ret;
	}

	sg_policy->thread = thread;
	kthread_bind_mask(thread, policy->related_cpus);
	init_irq_work(&sg_policy->irq_work, sugov_irq_work);
	mutex_init(&sg_policy->work_lock);

	wake_up_process(thread);

	return 0;
}

static void sugov_kthread_stop(struct sugov_policy *sg_policy)
{
	/* kthread only required for slow path */
	if (sg_policy->policy->fast_switch_enabled)
		return;

	kthread_flush_worker(&sg_policy->worker);
	kthread_stop(sg_policy->thread);
	mutex_destroy(&sg_policy->work_lock);
}

static struct sugov_tunables *sugov_tunables_alloc(struct sugov_policy *sg_policy)
{
	struct sugov_tunables *tunables;

	tunables = kzalloc(sizeof(*tunables), GFP_KERNEL);
	if (tunables) {
		gov_attr_set_init(&tunables->attr_set, &sg_policy->tunables_hook);
		if (!have_governor_per_policy())
			global_tunables = tunables;
	}
	return tunables;
}

static void sugov_tunables_save(struct cpufreq_policy *policy,
				struct sugov_tunables *tunables)
{
	int cpu;
	struct sugov_tunables *cached = per_cpu(cached_tunables, policy->cpu);

	if (!have_governor_per_policy())
		return;

	if (!cached) {
		cached = kzalloc(sizeof(*tunables), GFP_KERNEL);
		if (!cached)
			return;

		for_each_cpu(cpu, policy->related_cpus)
			per_cpu(cached_tunables, cpu) = cached;
	}

	cached->pl = tunables->pl;
	cached->hispeed_load = tunables->hispeed_load;
	cached->rtg_boost_freq = tunables->rtg_boost_freq;
	cached->hispeed_freq = tunables->hispeed_freq;
	cached->up_rate_limit_us = tunables->up_rate_limit_us;
	cached->down_rate_limit_us = tunables->down_rate_limit_us;
}

static void sugov_clear_global_tunables(void)
{
	if (!have_governor_per_policy())
		global_tunables = NULL;
}

static void sugov_tunables_restore(struct cpufreq_policy *policy)
{
	struct sugov_policy *sg_policy = policy->governor_data;
	struct sugov_tunables *tunables = sg_policy->tunables;
	struct sugov_tunables *cached = per_cpu(cached_tunables, policy->cpu);

	if (!cached)
		return;

	tunables->pl = cached->pl;
	tunables->hispeed_load = cached->hispeed_load;
	tunables->rtg_boost_freq = cached->rtg_boost_freq;
	tunables->hispeed_freq = cached->hispeed_freq;
	tunables->up_rate_limit_us = cached->up_rate_limit_us;
	tunables->down_rate_limit_us = cached->down_rate_limit_us;
}

static int sugov_init(struct cpufreq_policy *policy)
{
	struct sugov_policy *sg_policy;
	struct sugov_tunables *tunables;
	unsigned long util;
	int ret = 0;

	/* State should be equivalent to EXIT */
	if (policy->governor_data)
		return -EBUSY;

	cpufreq_enable_fast_switch(policy);

	sg_policy = sugov_policy_alloc(policy);
	if (!sg_policy) {
		ret = -ENOMEM;
		goto disable_fast_switch;
	}

	ret = sugov_kthread_create(sg_policy);
	if (ret)
		goto free_sg_policy;

	mutex_lock(&global_tunables_lock);

	if (global_tunables) {
		if (WARN_ON(have_governor_per_policy())) {
			ret = -EINVAL;
			goto stop_kthread;
		}
		policy->governor_data = sg_policy;
		sg_policy->tunables = global_tunables;

		gov_attr_set_get(&global_tunables->attr_set, &sg_policy->tunables_hook);
		goto out;
	}

	tunables = sugov_tunables_alloc(sg_policy);
	if (!tunables) {
		ret = -ENOMEM;
		goto stop_kthread;
	}

	tunables->up_rate_limit_us = cpufreq_policy_transition_delay_us(policy);
	tunables->down_rate_limit_us = cpufreq_policy_transition_delay_us(policy);
	tunables->hispeed_load = DEFAULT_HISPEED_LOAD;
	tunables->hispeed_freq = 0;

	switch (policy->cpu) {
	default:
	case 0:
		tunables->rtg_boost_freq = DEFAULT_CPU0_RTG_BOOST_FREQ;
		break;
	case 4:
		tunables->rtg_boost_freq = DEFAULT_CPU4_RTG_BOOST_FREQ;
		break;
	case 7:
		tunables->rtg_boost_freq = DEFAULT_CPU7_RTG_BOOST_FREQ;
		break;
	}

	policy->governor_data = sg_policy;
	sg_policy->tunables = tunables;

	util = target_util(sg_policy, sg_policy->tunables->rtg_boost_freq);
	sg_policy->rtg_boost_util = util;

	stale_ns = sched_ravg_window + (sched_ravg_window >> 3);

	sugov_tunables_restore(policy);

	ret = kobject_init_and_add(&tunables->attr_set.kobj, &sugov_tunables_ktype,
				   get_governor_parent_kobj(policy), "%s",
				   schedutil_gov.name);
	if (ret)
		goto fail;

out:
	mutex_unlock(&global_tunables_lock);
	return 0;

fail:
	kobject_put(&tunables->attr_set.kobj);
	policy->governor_data = NULL;
	sugov_clear_global_tunables();

stop_kthread:
	sugov_kthread_stop(sg_policy);
	mutex_unlock(&global_tunables_lock);

free_sg_policy:
	sugov_policy_free(sg_policy);

disable_fast_switch:
	cpufreq_disable_fast_switch(policy);

	pr_err("initialization failed (error %d)\n", ret);
	return ret;
}

static void sugov_exit(struct cpufreq_policy *policy)
{
	struct sugov_policy *sg_policy = policy->governor_data;
	struct sugov_tunables *tunables = sg_policy->tunables;
	unsigned int count;

	mutex_lock(&global_tunables_lock);

	count = gov_attr_set_put(&tunables->attr_set, &sg_policy->tunables_hook);
	policy->governor_data = NULL;
	if (!count) {
		sugov_tunables_save(policy, tunables);
		sugov_clear_global_tunables();
	}

	mutex_unlock(&global_tunables_lock);

	sugov_kthread_stop(sg_policy);
	sugov_policy_free(sg_policy);
	cpufreq_disable_fast_switch(policy);
}

static int sugov_start(struct cpufreq_policy *policy)
{
	struct sugov_policy *sg_policy = policy->governor_data;
	unsigned int cpu;

	sg_policy->up_rate_delay_ns =
		sg_policy->tunables->up_rate_limit_us * NSEC_PER_USEC;
	sg_policy->down_rate_delay_ns =
		sg_policy->tunables->down_rate_limit_us * NSEC_PER_USEC;
	update_min_rate_limit_ns(sg_policy);
	sg_policy->last_freq_update_time = 0;
	sg_policy->next_freq = 0;
	sg_policy->work_in_progress = false;
	sg_policy->limits_changed = false;
	sg_policy->need_freq_update = false;
	sg_policy->cached_raw_freq = 0;
	sg_policy->prev_cached_raw_freq = 0;

	for_each_cpu(cpu, policy->cpus) {
		struct sugov_cpu *sg_cpu = &per_cpu(sugov_cpu, cpu);

		memset(sg_cpu, 0, sizeof(*sg_cpu));
		sg_cpu->cpu = cpu;
		sg_cpu->sg_policy = sg_policy;
		sg_cpu->min =
			(SCHED_CAPACITY_SCALE * policy->cpuinfo.min_freq) /
			policy->cpuinfo.max_freq;
	}

	for_each_cpu(cpu, policy->cpus) {
		struct sugov_cpu *sg_cpu = &per_cpu(sugov_cpu, cpu);

		cpufreq_add_update_util_hook(cpu, &sg_cpu->update_util,
					     policy_is_shared(policy) ?
					     sugov_update_shared :
					     sugov_update_single);
	}
	return 0;
}

static void sugov_stop(struct cpufreq_policy *policy)
{
	struct sugov_policy *sg_policy = policy->governor_data;
	unsigned int cpu;

	for_each_cpu(cpu, policy->cpus)
		cpufreq_remove_update_util_hook(cpu);

	synchronize_sched();

	if (!policy->fast_switch_enabled) {
		irq_work_sync(&sg_policy->irq_work);
		kthread_cancel_work_sync(&sg_policy->work);
	}
}

static void sugov_limits(struct cpufreq_policy *policy)
{
	struct sugov_policy *sg_policy = policy->governor_data;
	unsigned long flags, now;
	unsigned int freq;

	if (!policy->fast_switch_enabled) {
		mutex_lock(&sg_policy->work_lock);
		raw_spin_lock_irqsave(&sg_policy->update_lock, flags);
		sugov_track_cycles(sg_policy, sg_policy->policy->cur,
				   ktime_get_ns());
		raw_spin_unlock_irqrestore(&sg_policy->update_lock, flags);
		cpufreq_policy_apply_limits(policy);
		mutex_unlock(&sg_policy->work_lock);
	} else {
		raw_spin_lock_irqsave(&sg_policy->update_lock, flags);
		freq = policy->cur;
		now = ktime_get_ns();

		/*
		 * cpufreq_driver_resolve_freq() has a clamp, so we do not need
		 * to do any sort of additional validation here.
		 */
		freq = cpufreq_driver_resolve_freq(policy, freq);
		sg_policy->cached_raw_freq = freq;
		sugov_fast_switch(sg_policy, now, freq);
		raw_spin_unlock_irqrestore(&sg_policy->update_lock, flags);
	}

	sg_policy->limits_changed = true;
}

static struct cpufreq_governor schedutil_gov = {
	.name = "schedutil",
	.owner = THIS_MODULE,
	.dynamic_switching = true,
	.init = sugov_init,
	.exit = sugov_exit,
	.start = sugov_start,
	.stop = sugov_stop,
	.limits = sugov_limits,
};

#ifdef CONFIG_CPU_FREQ_DEFAULT_GOV_SCHEDUTIL
struct cpufreq_governor *cpufreq_default_governor(void)
{
	return &schedutil_gov;
}
#endif

static int __init sugov_register(void)
{
	return cpufreq_register_governor(&schedutil_gov);
}
fs_initcall(sugov_register);