File tuned-2.19.0.29+git.b894a3e.obscpio of Package tuned
File tuned-2.19.0.29+git.b894a3e/.fmf/version of Package tuned

1

File tuned-2.19.0.29+git.b894a3e/.github/workflows/codeql.yml of Package tuned

name: "CodeQL"

on:
  push:
    branches: [ "master" ]
  pull_request:
    branches: [ "master" ]
  schedule:
    - cron: "48 13 * * 1"

jobs:
  analyze:
    name: Analyze
    runs-on: ubuntu-latest
    permissions:
      actions: read
      contents: read
      security-events: write

    strategy:
      fail-fast: false
      matrix:
        language: [ python ]

    steps:
      - name: Checkout
        uses: actions/checkout@v3

      - name: Initialize CodeQL
        uses: github/codeql-action/init@v2
        with:
          languages: ${{ matrix.language }}
          queries: +security-and-quality

      - name: Autobuild
        uses: github/codeql-action/autobuild@v2

      - name: Perform CodeQL Analysis
        uses: github/codeql-action/analyze@v2
        with:
          category: "/language:${{ matrix.language }}"

File tuned-2.19.0.29+git.b894a3e/.gitignore of Package tuned

*.pyc
*.pyo
tuned-*.tar.bz2
*~
*.html

File tuned-2.19.0.29+git.b894a3e/.packit.yaml of Package tuned

srpm_build_deps: []
upstream_tag_template: v{version}
jobs:
- job: copr_build
  trigger: pull_request
  targets:
  - fedora-all
  - epel-7-x86_64
  - epel-8-x86_64
  - centos-stream-9-x86_64
- job: tests
  trigger: pull_request
  targets:
  - fedora-all
  - epel-7-x86_64
  - epel-8-x86_64
  - centos-stream-9-x86_64
files_to_sync:
- tuned.spec
- .packit.yaml

File tuned-2.19.0.29+git.b894a3e/00_tuned of Package tuned

#! /bin/sh
set -e

# grub-mkconfig helper script.
# Copyright (C) 2014 Red Hat, Inc
# Author: Jaroslav Škarvada <jskarvad@redhat.com>
#
# This program is free software; you can redistribute it and/or
# modify it under the terms of the GNU General Public License
# as published by the Free Software Foundation; either version 2
# of the License, or (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program; if not, write to the Free Software
# Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.

tunedcfgdir=/etc/tuned
tuned_bootcmdline_file=$tunedcfgdir/bootcmdline

. $tuned_bootcmdline_file

echo "set tuned_params=\"$TUNED_BOOT_CMDLINE\""
echo "set tuned_initrd=\"$TUNED_BOOT_INITRD_ADD\""

File tuned-2.19.0.29+git.b894a3e/92-tuned.install of Package tuned

#!/bin/bash

COMMAND="$1"
KERNEL_VERSION="$2"
BOOT_DIR_ABS="$3"
KERNEL_IMAGE="$4"

if ! [[ $KERNEL_INSTALL_MACHINE_ID ]]; then
    exit 0
fi
MACHINE_ID=$KERNEL_INSTALL_MACHINE_ID

# with grub2 always /boot
BOOT_ROOT="/boot"
LOADER_ENTRIES="$BOOT_ROOT/loader/entries"

[ -d "$LOADER_ENTRIES" ] || exit 0
[ "$COMMAND" = "add" ] || exit 0

ARCH=`uname -m`
pushd "$LOADER_ENTRIES" &> /dev/null
for f in `basename "$MACHINE_ID"`-*.conf; do
    # Skip non-files and rescue entries
    if [ ! -f "$f" -o "${f: -12}" == "-rescue.conf" ]; then
        continue
    fi
    # Skip boom managed entries
    if [[ "$f" =~ \w*-[0-9a-f]{7,}-.*-.*.conf ]]; then
        continue
    fi
    if [ "${ARCH:0:4}" = "s390" ]; then
        # On s390(x), the zipl bootloader doesn't support variables,
        # unpatch TuneD variables which could be there from previous
        # TuneD versions
        grep -q '^\s*options\s\+.*\$tuned_params' "$f" && sed -i '/^\s*options\s\+/ s/\s\+\$tuned_params\b//g' "$f"
        grep -q '^\s*initrd\s\+.*\$tuned_initrd' "$f" && sed -i '/^\s*initrd\s\+/ s/\s\+\$tuned_initrd\b//g' "$f"
    else
        # Not on s390(x), add TuneD variables if they are not there
        grep -q '^\s*options\s\+.*\$tuned_params' "$f" || sed -i '/^\s*options\s\+/ s/\(.*\)/\1 \$tuned_params/' "$f"
        grep -q '^\s*initrd\s\+.*\$tuned_initrd' "$f" || sed -i '/^\s*initrd\s\+/ s/\(.*\)/\1 \$tuned_initrd/' "$f"
    fi
done
popd &> /dev/null
exit 0

File tuned-2.19.0.29+git.b894a3e/AUTHORS of Package tuned

Maintainers:
- Jaroslav Škarvada <jskarvad@redhat.com>

Based on old tuned/ktune code by:
- Philip Knirsch
- Thomas Woerner

Significant contributors:
- Jan Včelák
- Jan Kaluža

Other contributors:
- Petr Lautrbach
- Marcela Mašláňová
- Jarod Wilson
- Jan Hutař
- Arnaldo Carvalho de Melo <acme@redhat.com> - perf code for plugin_scheduler

Icon:
- Mariia Leonova <mleonova@redhat.com>

File tuned-2.19.0.29+git.b894a3e/CONTRIBUTING.md of Package tuned

Contributing to TuneD
=====================

Submitting patches
------------------

All patches should be based on the most recent revision of TuneD, which is
available on [GitHub](https://github.com/redhat-performance/tuned).

Patches should be created using `git` and the commit message should generally
consist of three parts:

1. The first line, which briefly describes the change that you are making.

2. A detailed description of the change.
   This should include information about the purpose of the change, i.e. why
   you are making the change in the first place. For example, if your patch
   addresses a bug, give a description of the bug and a way to reproduce it,
   if possible. If your patch adds a new feature, describe how it can be used
   and what it can be used for.

   Think about the impact that your change can have on the users. If your
   patch changes the behavior in a user-visible way, you should mention it
   and justify why the change should be made anyway.

   You should also describe any non-trivial design decisions that were made
   in the making of the patch. Write down any gotchas that could be useful
   for future readers of the code, and any hints that could help determine
   why the change was made in a particular way.

   You can also provide links, for example links to any documentation that
   could be useful for reviewers of the patch, or links to discussions about
   a bug that your patch addresses.

   If your patch resolves a bug in the Red Hat Bugzilla, you can link to it
   using the following tag:

   `Resolves: rhbz#1592743`

   If your patch addresses an issue in the GitHub repository, you can use
   the following notation:

   `Fixes #95`

3. Your sign-off. Every commit needs to have a `Signed-off-by` tag at the end
   of the commit message, indicating that the contributor of the patch agrees
   with the [Developer Certificate of Origin](/DCO). The tag should have the
   following format and it must include the real name and email address of
   the contributor:

   `Signed-off-by: John Doe <jdoe@somewhere.com>`

   If you use `git commit -s`, `git` will add the tag for you.

Every patch should represent a single logical change.
On the one hand, each patch should be complete enough so that after applying
it, the TuneD repository remains in a consistent state and TuneD remains, to
the best of the contributor's knowledge, completely functional (a corollary of
this is that when making fixes to your pull request on GitHub, you should
include the fixes in the commits where they logically belong rather than
appending new commits). On the other hand, a patch should not make multiple
changes which could be separated into individual ones.

Patches can either be submitted in the form of pull requests to the GitHub
repository, sent to the power-management (at) lists.fedoraproject.org mailing
list, or sent directly to the maintainers of the TuneD project.

These guidelines were inspired by the [contribution guidelines of the Linux
Kernel](https://www.kernel.org/doc/html/latest/process/submitting-patches.html).
You can find more rationale for TuneD's guidelines in that document.

File tuned-2.19.0.29+git.b894a3e/COPYING of Package tuned

GNU GENERAL PUBLIC LICENSE
Version 2, June 1991

Copyright (C) 1989, 1991 Free Software Foundation, Inc.,
51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA

Everyone is permitted to copy and distribute verbatim copies of this license
document, but changing it is not allowed.

Preamble

The licenses for most software are designed to take away your freedom to share
and change it. By contrast, the GNU General Public License is intended to
guarantee your freedom to share and change free software--to make sure the
software is free for all its users. This General Public License applies to
most of the Free Software Foundation's software and to any other program whose
authors commit to using it. (Some other Free Software Foundation software is
covered by the GNU Lesser General Public License instead.) You can apply it to
your programs, too.
When we speak of free software, we are referring to freedom, not price. Our General Public Licenses are designed to make sure that you have the freedom to distribute copies of free software (and charge for this service if you wish), that you receive source code or can get it if you want it, that you can change the software or use pieces of it in new free programs; and that you know you can do these things. To protect your rights, we need to make restrictions that forbid anyone to deny you these rights or to ask you to surrender the rights. These restrictions translate to certain responsibilities for you if you distribute copies of the software, or if you modify it. For example, if you distribute copies of such a program, whether gratis or for a fee, you must give the recipients all the rights that you have. You must make sure that they, too, receive or can get the source code. And you must show them these terms so they know their rights. We protect your rights with two steps: (1) copyright the software, and (2) offer you this license which gives you legal permission to copy, distribute and/or modify the software. Also, for each author's protection and ours, we want to make certain that everyone understands that there is no warranty for this free software. If the software is modified by someone else and passed on, we want its recipients to know that what they have is not the original, so that any problems introduced by others will not reflect on the original authors' reputations. Finally, any free program is threatened constantly by software patents. We wish to avoid the danger that redistributors of a free program will individually obtain patent licenses, in effect making the program proprietary. To prevent this, we have made it clear that any patent must be licensed for everyone's free use or not licensed at all. The precise terms and conditions for copying, distribution and modification follow. 
GNU GENERAL PUBLIC LICENSE TERMS AND CONDITIONS FOR COPYING, DISTRIBUTION AND MODIFICATION 0. This License applies to any program or other work which contains a notice placed by the copyright holder saying it may be distributed under the terms of this General Public License. The "Program", below, refers to any such program or work, and a "work based on the Program" means either the Program or any derivative work under copyright law: that is to say, a work containing the Program or a portion of it, either verbatim or with modifications and/or translated into another language. (Hereinafter, translation is included without limitation in the term "modification".) Each licensee is addressed as "you". Activities other than copying, distribution and modification are not covered by this License; they are outside its scope. The act of running the Program is not restricted, and the output from the Program is covered only if its contents constitute a work based on the Program (independent of having been made by running the Program). Whether that is true depends on what the Program does. 1. You may copy and distribute verbatim copies of the Program's source code as you receive it, in any medium, provided that you conspicuously and appropriately publish on each copy an appropriate copyright notice and disclaimer of warranty; keep intact all the notices that refer to this License and to the absence of any warranty; and give any other recipients of the Program a copy of this License along with the Program. You may charge a fee for the physical act of transferring a copy, and you may at your option offer warranty protection in exchange for a fee. 2. 
You may modify your copy or copies of the Program or any portion of it, thus forming a work based on the Program, and copy and distribute such modifications or work under the terms of Section 1 above, provided that you also meet all of these conditions: a) You must cause the modified files to carry prominent notices stating that you changed the files and the date of any change. b) You must cause any work that you distribute or publish, that in whole or in part contains or is derived from the Program or any part thereof, to be licensed as a whole at no charge to all third parties under the terms of this License. c) If the modified program normally reads commands interactively when run, you must cause it, when started running for such interactive use in the most ordinary way, to print or display an announcement including an appropriate copyright notice and a notice that there is no warranty (or else, saying that you provide a warranty) and that users may redistribute the program under these conditions, and telling the user how to view a copy of this License. (Exception: if the Program itself is interactive but does not normally print such an announcement, your work based on the Program is not required to print an announcement.) These requirements apply to the modified work as a whole. If identifiable sections of that work are not derived from the Program, and can be reasonably considered independent and separate works in themselves, then this License, and its terms, do not apply to those sections when you distribute them as separate works. But when you distribute the same sections as part of a whole which is a work based on the Program, the distribution of the whole must be on the terms of this License, whose permissions for other licensees extend to the entire whole, and thus to each and every part regardless of who wrote it. 
Thus, it is not the intent of this section to claim rights or contest your rights to work written entirely by you; rather, the intent is to exercise the right to control the distribution of derivative or collective works based on the Program. In addition, mere aggregation of another work not based on the Program with the Program (or with a work based on the Program) on a volume of a storage or distribution medium does not bring the other work under the scope of this License. 3. You may copy and distribute the Program (or a work based on it, under Section 2) in object code or executable form under the terms of Sections 1 and 2 above provided that you also do one of the following: a) Accompany it with the complete corresponding machine-readable source code, which must be distributed under the terms of Sections 1 and 2 above on a medium customarily used for software interchange; or, b) Accompany it with a written offer, valid for at least three years, to give any third party, for a charge no more than your cost of physically performing source distribution, a complete machine-readable copy of the corresponding source code, to be distributed under the terms of Sections 1 and 2 above on a medium customarily used for software interchange; or, c) Accompany it with the information you received as to the offer to distribute corresponding source code. (This alternative is allowed only for noncommercial distribution and only if you received the program in object code or executable form with such an offer, in accord with Subsection b above.) The source code for a work means the preferred form of the work for making modifications to it. For an executable work, complete source code means all the source code for all modules it contains, plus any associated interface definition files, plus the scripts used to control compilation and installation of the executable. 
However, as a special exception, the source code distributed need not include anything that is normally distributed (in either source or binary form) with the major components (compiler, kernel, and so on) of the operating system on which the executable runs, unless that component itself accompanies the executable. If distribution of executable or object code is made by offering access to copy from a designated place, then offering equivalent access to copy the source code from the same place counts as distribution of the source code, even though third parties are not compelled to copy the source along with the object code. 4. You may not copy, modify, sublicense, or distribute the Program except as expressly provided under this License. Any attempt otherwise to copy, modify, sublicense or distribute the Program is void, and will automatically terminate your rights under this License. However, parties who have received copies, or rights, from you under this License will not have their licenses terminated so long as such parties remain in full compliance. 5. You are not required to accept this License, since you have not signed it. However, nothing else grants you permission to modify or distribute the Program or its derivative works. These actions are prohibited by law if you do not accept this License. Therefore, by modifying or distributing the Program (or any work based on the Program), you indicate your acceptance of this License to do so, and all its terms and conditions for copying, distributing or modifying the Program or works based on it. 6. Each time you redistribute the Program (or any work based on the Program), the recipient automatically receives a license from the original licensor to copy, distribute or modify the Program subject to these terms and conditions. You may not impose any further restrictions on the recipients' exercise of the rights granted herein. You are not responsible for enforcing compliance by third parties to this License. 7. 
If, as a consequence of a court judgment or allegation of patent infringement or for any other reason (not limited to patent issues), conditions are imposed on you (whether by court order, agreement or otherwise) that contradict the conditions of this License, they do not excuse you from the conditions of this License. If you cannot distribute so as to satisfy simultaneously your obligations under this License and any other pertinent obligations, then as a consequence you may not distribute the Program at all. For example, if a patent license would not permit royalty-free redistribution of the Program by all those who receive copies directly or indirectly through you, then the only way you could satisfy both it and this License would be to refrain entirely from distribution of the Program. If any portion of this section is held invalid or unenforceable under any particular circumstance, the balance of the section is intended to apply and the section as a whole is intended to apply in other circumstances. It is not the purpose of this section to induce you to infringe any patents or other property right claims or to contest validity of any such claims; this section has the sole purpose of protecting the integrity of the free software distribution system, which is implemented by public license practices. Many people have made generous contributions to the wide range of software distributed through that system in reliance on consistent application of that system; it is up to the author/donor to decide if he or she is willing to distribute software through any other system and a licensee cannot impose that choice. This section is intended to make thoroughly clear what is believed to be a consequence of the rest of this License. 8. 
If the distribution and/or use of the Program is restricted in certain countries either by patents or by copyrighted interfaces, the original copyright holder who places the Program under this License may add an explicit geographical distribution limitation excluding those countries, so that distribution is permitted only in or among countries not thus excluded. In such case, this License incorporates the limitation as if written in the body of this License. 9. The Free Software Foundation may publish revised and/or new versions of the General Public License from time to time. Such new versions will be similar in spirit to the present version, but may differ in detail to address new problems or concerns. Each version is given a distinguishing version number. If the Program specifies a version number of this License which applies to it and "any later version", you have the option of following the terms and conditions either of that version or of any later version published by the Free Software Foundation. If the Program does not specify a version number of this License, you may choose any version ever published by the Free Software Foundation. 10. If you wish to incorporate parts of the Program into other free programs whose distribution conditions are different, write to the author to ask for permission. For software which is copyrighted by the Free Software Foundation, write to the Free Software Foundation; we sometimes make exceptions for this. Our decision will be guided by the two goals of preserving the free status of all derivatives of our free software and of promoting the sharing and reuse of software generally. NO WARRANTY 11. BECAUSE THE PROGRAM IS LICENSED FREE OF CHARGE, THERE IS NO WARRANTY FOR THE PROGRAM, TO THE EXTENT PERMITTED BY APPLICABLE LAW. 
EXCEPT WHEN OTHERWISE STATED IN WRITING THE COPYRIGHT HOLDERS AND/OR OTHER PARTIES PROVIDE THE PROGRAM "AS IS" WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESSED OR IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE. THE ENTIRE RISK AS TO THE QUALITY AND PERFORMANCE OF THE PROGRAM IS WITH YOU. SHOULD THE PROGRAM PROVE DEFECTIVE, YOU ASSUME THE COST OF ALL NECESSARY SERVICING, REPAIR OR CORRECTION. 12. IN NO EVENT UNLESS REQUIRED BY APPLICABLE LAW OR AGREED TO IN WRITING WILL ANY COPYRIGHT HOLDER, OR ANY OTHER PARTY WHO MAY MODIFY AND/OR REDISTRIBUTE THE PROGRAM AS PERMITTED ABOVE, BE LIABLE TO YOU FOR DAMAGES, INCLUDING ANY GENERAL, SPECIAL, INCIDENTAL OR CONSEQUENTIAL DAMAGES ARISING OUT OF THE USE OR INABILITY TO USE THE PROGRAM (INCLUDING BUT NOT LIMITED TO LOSS OF DATA OR DATA BEING RENDERED INACCURATE OR LOSSES SUSTAINED BY YOU OR THIRD PARTIES OR A FAILURE OF THE PROGRAM TO OPERATE WITH ANY OTHER PROGRAMS), EVEN IF SUCH HOLDER OR OTHER PARTY HAS BEEN ADVISED OF THE POSSIBILITY OF SUCH DAMAGES. END OF TERMS AND CONDITIONS How to Apply These Terms to Your New Programs If you develop a new program, and you want it to be of the greatest possible use to the public, the best way to achieve this is to make it free software which everyone can redistribute and change under these terms. To do so, attach the following notices to the program. It is safest to attach them to the start of each source file to most effectively convey the exclusion of warranty; and each file should have at least the "copyright" line and a pointer to where the full notice is found. <one line to give the program's name and a brief idea of what it does.> Copyright (C) <year> <name of author> This program is free software; you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation; either version 2 of the License, or (at your option) any later version. 
This program is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details. You should have received a copy of the GNU General Public License along with this program; if not, write to the Free Software Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA. Also add information on how to contact you by electronic and paper mail. If the program is interactive, make it output a short notice like this when it starts in an interactive mode: Gnomovision version 69, Copyright (C) year name of author Gnomovision comes with ABSOLUTELY NO WARRANTY; for details type `show w'. This is free software, and you are welcome to redistribute it under certain conditions; type `show c' for details. The hypothetical commands `show w' and `show c' should show the appropriate parts of the General Public License. Of course, the commands you use may be called something other than `show w' and `show c'; they could even be mouse-clicks or menu items--whatever suits your program. You should also get your employer (if you work as a programmer) or your school, if any, to sign a "copyright disclaimer" for the program, if necessary. Here is a sample; alter the names: Yoyodyne, Inc., hereby disclaims all copyright interest in the program `Gnomovision' (which makes passes at compilers) written by James Hacker. <signature of Ty Coon>, 1 April 1989 Ty Coon, President of Vice This General Public License does not permit incorporating your program into proprietary programs. If your program is a subroutine library, you may consider it more useful to permit linking proprietary applications with the library. If this is what you want to do, use the GNU Lesser General Public License instead of this License. 
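As a worked instance of the notice template above, the 00_tuned grub helper shipped in this same archive attaches the notice at the top of the source file, with the author's name and year filled in (excerpt from the file earlier in this listing):

```shell
# grub-mkconfig helper script.
# Copyright (C) 2014 Red Hat, Inc
# Author: Jaroslav Škarvada <jskarvad@redhat.com>
#
# This program is free software; you can redistribute it and/or
# modify it under the terms of the GNU General Public License
# as published by the Free Software Foundation; either version 2
# of the License, or (at your option) any later version.
```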
File tuned-2.19.0.29+git.b894a3e/DCO of Package tuned

Developer Certificate of Origin
Version 1.1

Copyright (C) 2004, 2006 The Linux Foundation and its contributors.
1 Letterman Drive, Suite D4700, San Francisco, CA, 94129

Everyone is permitted to copy and distribute verbatim copies of this license
document, but changing it is not allowed.

Developer's Certificate of Origin 1.1

By making a contribution to this project, I certify that:

(a) The contribution was created in whole or in part by me and I have the
right to submit it under the open source license indicated in the file; or

(b) The contribution is based upon previous work that, to the best of my
knowledge, is covered under an appropriate open source license and I have the
right under that license to submit that work with modifications, whether
created in whole or in part by me, under the same open source license (unless
I am permitted to submit under a different license), as indicated in the file;
or

(c) The contribution was provided directly to me by some other person who
certified (a), (b) or (c) and I have not modified it.

(d) I understand and agree that this project and the contribution are public
and that a record of the contribution (including all personal information I
submit with it, including my sign-off) is maintained indefinitely and may be
redistributed consistent with this project or the open source license(s)
involved.

File tuned-2.19.0.29+git.b894a3e/INSTALL of Package tuned

Installation instructions
*************************

The tuned daemon is written in pure Python. Nothing needs to be built.

For installation use 'make install'. Optionally, DESTDIR can be specified.

By default, the tuned modules are installed to the Python3 destination (e.g.
/usr/lib/python3.6/site-packages/) and shebangs in executable Python files are
modified to use Python3. If you want tuned to use Python2 instead, set PYTHON
to the full path of Python2. Example:

make PYTHON=/usr/bin/python2 install

File tuned-2.19.0.29+git.b894a3e/Makefile of Package tuned

NAME = tuned
# set to devel for nightly GIT snapshot
BUILD = release
# which config to use in mock-build target
MOCK_CONFIG = rhel-7-x86_64
# scratch-build for triggering Jenkins
SCRATCH_BUILD_TARGET = rhel-7.5-candidate
VERSION = $(shell awk '/^Version:/ {print $$2}' tuned.spec)
PRERELEASENUM = $(shell awk '/^\s*%global prereleasenum/ {print $$3}' tuned.spec)
ifneq ($(strip $(PRERELEASENUM)),)
PRERELEASE = -rc.$(PRERELEASENUM)
endif
GIT_DATE = $(shell date +'%Y%m%d')
ifeq ($(BUILD), release)
RPM_ARGS += --without snapshot
MOCK_ARGS += --without=snapshot
RPM_VERSION = $(NAME)-$(VERSION)-1
else
RPM_ARGS += --with snapshot
MOCK_ARGS += --with=snapshot
GIT_SHORT_COMMIT = $(shell git rev-parse --short=8 --verify HEAD)
GIT_SUFFIX = $(GIT_DATE)git$(GIT_SHORT_COMMIT)
GIT_PSUFFIX = .$(GIT_SUFFIX)
RPM_VERSION = $(NAME)-$(VERSION)-1$(GIT_PSUFFIX)
endif
UNITDIR_FALLBACK = /usr/lib/systemd/system
UNITDIR_DETECT = $(shell pkg-config systemd --variable systemdsystemunitdir || rpm --eval '%{_unitdir}' 2>/dev/null || echo $(UNITDIR_FALLBACK))
UNITDIR = $(UNITDIR_DETECT:%{_unitdir}=$(UNITDIR_FALLBACK))
TMPFILESDIR_FALLBACK = /usr/lib/tmpfiles.d
TMPFILESDIR_DETECT = $(shell pkg-config systemd --variable tmpfilesdir || rpm --eval '%{_tmpfilesdir}' 2>/dev/null || echo $(TMPFILESDIR_FALLBACK))
TMPFILESDIR = $(TMPFILESDIR_DETECT:%{_tmpfilesdir}=$(TMPFILESDIR_FALLBACK))
VERSIONED_NAME = $(NAME)-$(VERSION)$(PRERELEASE)$(GIT_PSUFFIX)

export SYSCONFDIR = /etc
export DATADIR = /usr/share
export DOCDIR = $(DATADIR)/doc/$(NAME)
PYTHON = /usr/bin/python3
PYLINT = pylint-3
ifeq ($(PYTHON),python2)
PYLINT = pylint-2
endif
SHEBANG_REWRITE_REGEX= '1s|^\#!/usr/bin/\<python3\>|\#!$(PYTHON)|'
PYTHON_SITELIB = $(shell $(PYTHON) -c 'from distutils.sysconfig import get_python_lib; print(get_python_lib());')
ifeq ($(PYTHON_SITELIB),)
$(error Failed to determine python library directory)
endif
KERNELINSTALLHOOKDIR = /usr/lib/kernel/install.d
TUNED_PROFILESDIR = /usr/lib/tuned
TUNED_RECOMMEND_DIR = $(TUNED_PROFILESDIR)/recommend.d
TUNED_USER_RECOMMEND_DIR = $(SYSCONFDIR)/tuned/recommend.d
BASH_COMPLETIONS = $(DATADIR)/bash-completion/completions

copy_executable = install -Dm 0755 $(1) $(2)
rewrite_shebang = sed -i -r -e $(SHEBANG_REWRITE_REGEX) $(1)
restore_timestamp = touch -r $(1) $(2)
install_python_script = $(call copy_executable,$(1),$(2)) \
        && $(call rewrite_shebang,$(2)) && $(call restore_timestamp,$(1),$(2));

release-dir:
        mkdir -p $(VERSIONED_NAME)

release-cp: release-dir
        cp -a AUTHORS COPYING INSTALL README $(VERSIONED_NAME)
        cp -a tuned.py tuned.spec tuned.service tuned.tmpfiles Makefile tuned-adm.py \
                tuned-adm.bash dbus.conf recommend.conf tuned-main.conf 00_tuned \
                92-tuned.install bootcmdline modules.conf com.redhat.tuned.policy \
                tuned-gui.py tuned-gui.glade \
                tuned-gui.desktop $(VERSIONED_NAME)
        cp -a doc experiments libexec man profiles systemtap tuned contrib icons \
                tests $(VERSIONED_NAME)

archive: clean release-cp
        tar czf $(VERSIONED_NAME).tar.gz $(VERSIONED_NAME)

rpm-build-dir:
        mkdir rpm-build-dir

srpm: archive rpm-build-dir
        rpmbuild --define "_sourcedir `pwd`/rpm-build-dir" --define "_srcrpmdir `pwd`/rpm-build-dir" \
                --define "_specdir `pwd`/rpm-build-dir" --nodeps $(RPM_ARGS) -ts $(VERSIONED_NAME).tar.gz

rpm: archive rpm-build-dir
        rpmbuild --define "_sourcedir `pwd`/rpm-build-dir" --define "_srcrpmdir `pwd`/rpm-build-dir" \
                --define "_specdir `pwd`/rpm-build-dir" --nodeps $(RPM_ARGS) -tb $(VERSIONED_NAME).tar.gz

clean-mock-result-dir:
        rm -f mock-result-dir/*

mock-result-dir:
        mkdir mock-result-dir

# delete RPM files older than approximately one week if total space occupied
# is more than 5 MB
tidy-mock-result-dir: mock-result-dir
        if [ `du -bs mock-result-dir | tail -n 1 | cut -f1` -gt 5000000 ]; then \
                rm -f `find mock-result-dir -name '*.rpm' -mtime +7`; \
        fi

mock-build: srpm
        mock -r $(MOCK_CONFIG) $(MOCK_ARGS) --resultdir=`pwd`/mock-result-dir `ls rpm-build-dir/*$(RPM_VERSION).*.src.rpm | head -n 1` && \
        rm -f mock-result-dir/*.log

mock-devel-build: srpm
        mock -r $(MOCK_CONFIG) --with=snapshot \
                --define "git_short_commit `if [ -n \"$(GIT_SHORT_COMMIT)\" ]; then echo $(GIT_SHORT_COMMIT); else git rev-parse --short=8 --verify HEAD; fi`" \
                --resultdir=`pwd`/mock-result-dir `ls rpm-build-dir/*$(RPM_VERSION).*.src.rpm | head -n 1` && \
        rm -f mock-result-dir/*.log

createrepo: mock-devel-build
        createrepo mock-result-dir

# scratch build for triggering Jenkins
scratch-build: mock-devel-build
        brew build --scratch --nowait $(SCRATCH_BUILD_TARGET) `ls mock-result-dir/*$(GIT_DATE)git*.*.src.rpm | head -n 1`

nightly: tidy-mock-result-dir createrepo scratch-build
        rsync -ave ssh --delete --progress mock-result-dir/ jskarvad@fedorapeople.org:/home/fedora/jskarvad/public_html/tuned/devel/repo/

html:
        $(MAKE) -C doc/manual

install-html:
        $(MAKE) -C doc/manual install

clean-html:
        $(MAKE) -C doc/manual clean

install-dirs:
        mkdir -p $(DESTDIR)$(PYTHON_SITELIB)
        mkdir -p $(DESTDIR)$(TUNED_PROFILESDIR)
        mkdir -p $(DESTDIR)/var/lib/tuned
        mkdir -p $(DESTDIR)/var/log/tuned
        mkdir -p $(DESTDIR)/run/tuned
        mkdir -p $(DESTDIR)$(DOCDIR)
        mkdir -p $(DESTDIR)$(SYSCONFDIR)
        mkdir -p $(DESTDIR)$(TUNED_RECOMMEND_DIR)
        mkdir -p $(DESTDIR)$(TUNED_USER_RECOMMEND_DIR)

install: install-dirs
        # library
        cp -a tuned $(DESTDIR)$(PYTHON_SITELIB)
        # binaries
        $(call install_python_script,tuned.py,$(DESTDIR)/usr/sbin/tuned)
        $(call install_python_script,tuned-adm.py,$(DESTDIR)/usr/sbin/tuned-adm)
        $(call install_python_script,tuned-gui.py,$(DESTDIR)/usr/sbin/tuned-gui)
        $(foreach file, diskdevstat netdevstat scomes, \
                install -Dpm 0755 systemtap/$(file) $(DESTDIR)/usr/sbin/$(notdir $(file));)
        $(call install_python_script, \
                systemtap/varnetload, $(DESTDIR)/usr/sbin/varnetload)
        # glade
        install -Dpm 0644 tuned-gui.glade $(DESTDIR)$(DATADIR)/tuned/ui/tuned-gui.glade
        # tools
        $(call install_python_script, \
                experiments/powertop2tuned.py, $(DESTDIR)/usr/bin/powertop2tuned)
        # configuration files
        install -Dpm 0644 tuned-main.conf $(DESTDIR)$(SYSCONFDIR)/tuned/tuned-main.conf
        # No profile at the moment, autodetection will be used
        echo -n > $(DESTDIR)$(SYSCONFDIR)/tuned/active_profile
        echo -n > $(DESTDIR)$(SYSCONFDIR)/tuned/profile_mode
        echo -n > $(DESTDIR)$(SYSCONFDIR)/tuned/post_loaded_profile
        install -Dpm 0644 bootcmdline $(DESTDIR)$(SYSCONFDIR)/tuned/bootcmdline
        install -Dpm 0644 modules.conf $(DESTDIR)$(SYSCONFDIR)/modprobe.d/tuned.conf
        # profiles & system config
        cp -a profiles/* $(DESTDIR)$(TUNED_PROFILESDIR)/
        mv $(DESTDIR)$(TUNED_PROFILESDIR)/realtime/realtime-variables.conf \
                $(DESTDIR)$(SYSCONFDIR)/tuned/realtime-variables.conf
        mv $(DESTDIR)$(TUNED_PROFILESDIR)/realtime-virtual-guest/realtime-virtual-guest-variables.conf \
                $(DESTDIR)$(SYSCONFDIR)/tuned/realtime-virtual-guest-variables.conf
        mv $(DESTDIR)$(TUNED_PROFILESDIR)/realtime-virtual-host/realtime-virtual-host-variables.conf \
                $(DESTDIR)$(SYSCONFDIR)/tuned/realtime-virtual-host-variables.conf
        mv $(DESTDIR)$(TUNED_PROFILESDIR)/cpu-partitioning/cpu-partitioning-variables.conf \
                $(DESTDIR)$(SYSCONFDIR)/tuned/cpu-partitioning-variables.conf
        mv $(DESTDIR)$(TUNED_PROFILESDIR)/cpu-partitioning-powersave/cpu-partitioning-powersave-variables.conf \
                $(DESTDIR)$(SYSCONFDIR)/tuned/cpu-partitioning-powersave-variables.conf
        install -pm 0644 recommend.conf $(DESTDIR)$(TUNED_RECOMMEND_DIR)/50-tuned.conf
        # bash completion
        install -Dpm 0644 tuned-adm.bash $(DESTDIR)$(BASH_COMPLETIONS)/tuned-adm
        # runtime directory
        install -Dpm 0644 tuned.tmpfiles $(DESTDIR)$(TMPFILESDIR)/tuned.conf
        # systemd units
        install -Dpm 0644 tuned.service $(DESTDIR)$(UNITDIR)/tuned.service
        # dbus
configuration install -Dpm 0644 dbus.conf $(DESTDIR)$(SYSCONFDIR)/dbus-1/system.d/com.redhat.tuned.conf # grub template install -Dpm 0755 00_tuned $(DESTDIR)$(SYSCONFDIR)/grub.d/00_tuned # kernel install hook install -Dpm 0755 92-tuned.install $(DESTDIR)$(KERNELINSTALLHOOKDIR)/92-tuned.install # polkit configuration install -Dpm 0644 com.redhat.tuned.policy $(DESTDIR)$(DATADIR)/polkit-1/actions/com.redhat.tuned.policy # manual pages $(foreach man_section, 5 7 8, $(foreach file, $(wildcard man/*.$(man_section)), \ install -Dpm 0644 $(file) $(DESTDIR)$(DATADIR)/man/man$(man_section)/$(notdir $(file));)) # documentation cp -a doc/README* $(DESTDIR)$(DOCDIR) cp -a doc/*.txt $(DESTDIR)$(DOCDIR) cp AUTHORS COPYING README $(DESTDIR)$(DOCDIR) # libexec scripts $(foreach file, $(wildcard libexec/*), \ $(call install_python_script, \ $(file), $(DESTDIR)/usr/libexec/tuned/$(notdir $(file)))) # icon install -Dpm 0644 icons/tuned.svg $(DESTDIR)$(DATADIR)/icons/hicolor/scalable/apps/tuned.svg # desktop file install -dD $(DESTDIR)$(DATADIR)/applications desktop-file-install --dir=$(DESTDIR)$(DATADIR)/applications tuned-gui.desktop clean: clean-html find -name "*.pyc" | xargs rm -f rm -rf $(VERSIONED_NAME) rpm-build-dir test: $(PYTHON) -B -m unittest discover tests/unit lint: $(PYLINT) -E -f parseable tuned *.py tests/unit .PHONY: clean archive srpm tag test lint 0707010000000F000081A40000000000000000000000016391BC3A00001061000000000000000000000000000000000000002300000000tuned-2.19.0.29+git.b894a3e/READMETuneD: Daemon for monitoring and adaptive tuning of system devices. (This is tuned 2.0 with a new code base. If you are looking for the older version, please check out branch '1.0' in our Git repository.) How to use it ------------- TuneD is incompatible with the cpupower and power-profiles-daemon. If you have these services, uninstall or disable them. 
In Fedora, Red Hat Enterprise Linux, and their derivatives, install the tuned package (optionally also tuned-utils, tuned-utils-systemtap, and tuned-profiles-compat):

# dnf install tuned

After the installation, start the tuned service:

# systemctl start tuned

To start tuned automatically whenever your machine boots, run:

# systemctl enable tuned

While the daemon is running, you can easily control it using the 'tuned-adm' command line utility. This tool communicates with the daemon over D-Bus. Any user can list the available profiles and see which one is active, but profiles can be switched only by the root user or by a user with a physical console allocated on the machine (X11 or a physical tty, but not SSH).

To see the currently active profile, run:

# tuned-adm active

To list all available profiles, run:

# tuned-adm list

To switch to a different profile, run:

# tuned-adm profile <profile-name>

Your profile choice is also written to /etc/tuned/active_profile and is reused when the daemon is restarted (e.g. after a machine reboot).

To disable all tunings, run:

# tuned-adm off

To show information about the given profile, or about the current profile if none is specified, run:

# tuned-adm profile_info

To verify the current profile against the system settings, run:

# tuned-adm verify

To enable automatic profile selection mode and switch to the recommended profile, run:

# tuned-adm auto_profile

To show the current profile selection mode, run:

# tuned-adm profile_mode

To have TuneD recommend a profile suitable for your system, run:

# tuned-adm recommend

Currently only static detection is implemented - it decides according to the data in /etc/system-release-cpe and the virt-what output. The rules for autodetection are defined in the file /usr/lib/tuned/recommend.d/50-tuned.conf. They can be overridden by the user by putting a file into /etc/tuned/recommend.d or a file named recommend.conf into /etc/tuned (see tuned-adm(8) for more details).
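As an illustration, a user override dropped into /etc/tuned/recommend.d/ could look like the following. The file name, profile names, and the key semantics shown here are assumptions for the example; consult the shipped /usr/lib/tuned/recommend.d/50-tuned.conf and tuned-adm(8) for the authoritative syntax.

```
# /etc/tuned/recommend.d/60-local.conf (hypothetical file name)
# Sections are profile names; the first section whose conditions all
# match is the recommended profile. "virt" is a regular expression
# matched against the virt-what output.
[virtual-guest]
virt=.+

# Fallback when nothing above matched (empty section matches anything).
[balanced]
```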
The default rules recommend profiles targeted at the best performance, or the balanced profile if unsure.

Available tunings
-----------------

We are currently working on many new tuning features. Some are described in the manual pages, some are not yet documented.

Authors
-------

The best way to contact the authors of the project is to use our mailing list:

  power-management@lists.fedoraproject.org

In case you want to contact an individual author, you will find the e-mail address in every commit message in our Git repository:

  https://github.com/redhat-performance/tuned.git

You can also join the #fedora-power IRC channel on Freenode.

Web page: https://tuned-project.org/

Contributing
------------

See the file CONTRIBUTING.md for contribution guidelines.

License
-------

Copyright (C) 2008-2021 Red Hat, Inc.

This program is free software; you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation; either version 2 of the License, or (at your option) any later version.

This program is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details.

You should have received a copy of the GNU General Public License along with this program; if not, write to the Free Software Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.

The full text of the license is enclosed in the COPYING file.

The Developer Certificate of Origin, distributed in the file 'DCO', is licensed differently; see the file for the text of the license.

The icon: The TuneD icon was created by Mariia Leonova <mleonova@redhat.com> and is licensed under the Creative Commons Attribution-ShareAlike 3.0 license (http://creativecommons.org/licenses/by-sa/3.0/legalcode).
07070100000010000081A40000000000000000000000016391BC3A00000220000000000000000000000000000000000000002100000000tuned-2.19.0.29+git.b894a3e/TODO
* Implement a configurable policy which determines how many (and how big) log buffers a user with a given UID can create using the log_capture_start DBus call.
* Destroy the log handler created by log_capture_start() when the caller disconnects from DBus.
* Use only one timer for destroying log handlers at a time. Create a new timer as necessary when the old timer fires.
* Handle signals in tuned-adm so that the log_capture_finish() DBus call is made even if we receive a signal. This may require some rewriting of tuned-adm.
07070100000011000081A40000000000000000000000016391BC3A00000457000000000000000000000000000000000000002800000000tuned-2.19.0.29+git.b894a3e/bootcmdline
# This file specifies additional parameters for the kernel boot command line and
# initrd overlay images. Its content is set by the TuneD bootloader plugin
# and sourced by grub2-mkconfig (via the /etc/grub.d/00_tuned script).
#
# Please do not edit this file. The content of this file can be overwritten by
# a switch of the TuneD profile.
#
# If you need to add parameters to the kernel boot command line, create a
# TuneD profile containing the following:
#
# [bootloader]
# cmdline = YOUR_ADDITIONAL_KERNEL_PARAMETERS
#
# Then switch to your newly created profile by:
#
# tuned-adm profile YOUR_NEW_PROFILE
#
# Your current grub2 config will be patched, but you can also
# regenerate it at any time by:
#
# grub2-mkconfig -o /boot/grub2/grub.cfg
#
# YOUR_ADDITIONAL_KERNEL_PARAMETERS will stay preserved.
# # Similarly if you need to add initrd overlay image, create TuneD profile # containing the following: # # [bootloader] # initrd_add_img = INITRD_OVERLAY_IMAGE # # or to generate initrd overlay image from the directory: # # [bootloader] # initrd_add_dir = INITRD_OVERLAY_DIRECTORY TUNED_BOOT_CMDLINE= TUNED_BOOT_INITRD_ADD= 07070100000012000081A40000000000000000000000016391BC3A0000002D000000000000000000000000000000000000002300000000tuned-2.19.0.29+git.b894a3e/ci.fmfdiscover: how: fmf execute: how: tmt 07070100000013000081A40000000000000000000000016391BC3A00001E49000000000000000000000000000000000000003400000000tuned-2.19.0.29+git.b894a3e/com.redhat.tuned.policy<?xml version="1.0" encoding="UTF-8"?> <!DOCTYPE policyconfig PUBLIC "-//freedesktop//DTD polkit Policy Configuration 1.0//EN" "http://www.freedesktop.org/software/polkit/policyconfig-1.dtd"> <policyconfig> <vendor>TuneD</vendor> <vendor_url>https://fedorahosted.org/tuned/</vendor_url> <icon_name>tuned</icon_name> <action id="com.redhat.tuned.active_profile"> <description>Show active profile</description> <message>Authentication is required to show active profile</message> <defaults> <allow_any>yes</allow_any> <allow_inactive>yes</allow_inactive> <allow_active>yes</allow_active> </defaults> </action> <action id="com.redhat.tuned.profile_mode"> <description>Show current profile selection mode</description> <message>Authentication is required to show current profile selection mode</message> <defaults> <allow_any>yes</allow_any> <allow_inactive>yes</allow_inactive> <allow_active>yes</allow_active> </defaults> </action> <action id="com.redhat.tuned.post_loaded_profile"> <description>Show active post-loaded profile</description> <message>Authentication is required to show active post-loaded profile</message> <defaults> <allow_any>yes</allow_any> <allow_inactive>yes</allow_inactive> <allow_active>yes</allow_active> </defaults> </action> <action id="com.redhat.tuned.disable"> <description>Disable TuneD</description> 
<message>Authentication is required to disable TuneD</message> <defaults> <allow_any>auth_admin</allow_any> <allow_inactive>auth_admin</allow_inactive> <allow_active>yes</allow_active> </defaults> </action> <action id="com.redhat.tuned.is_running"> <description>Check whether TuneD is running</description> <message>Authentication is required to check whether TuneD is running</message> <defaults> <allow_any>yes</allow_any> <allow_inactive>yes</allow_inactive> <allow_active>yes</allow_active> </defaults> </action> <action id="com.redhat.tuned.profile_info"> <description>Show information about TuneD profile</description> <message>Authentication is required to show information about TuneD profile</message> <defaults> <allow_any>yes</allow_any> <allow_inactive>yes</allow_inactive> <allow_active>yes</allow_active> </defaults> </action> <action id="com.redhat.tuned.profiles"> <description>List TuneD profiles</description> <message>Authentication is required to list TuneD profiles</message> <defaults> <allow_any>yes</allow_any> <allow_inactive>yes</allow_inactive> <allow_active>yes</allow_active> </defaults> </action> <action id="com.redhat.tuned.profiles2"> <description>List TuneD profiles</description> <message>Authentication is required to list TuneD profiles</message> <defaults> <allow_any>yes</allow_any> <allow_inactive>yes</allow_inactive> <allow_active>yes</allow_active> </defaults> </action> <action id="com.redhat.tuned.recommend_profile"> <description>Show TuneD profile name which is recommended for your system</description> <message>Authentication is required to show recommended profile name</message> <defaults> <allow_any>yes</allow_any> <allow_inactive>yes</allow_inactive> <allow_active>yes</allow_active> </defaults> </action> <action id="com.redhat.tuned.reload"> <description>Reload TuneD configuration</description> <message>Authentication is required to reload TuneD configuration</message> <defaults> <allow_any>auth_admin</allow_any> 
<allow_inactive>auth_admin</allow_inactive> <allow_active>yes</allow_active> </defaults> </action> <action id="com.redhat.tuned.start"> <description>Start TuneD daemon</description> <message>Authentication is required to start TuneD daemon</message> <defaults> <allow_any>auth_admin</allow_any> <allow_inactive>auth_admin</allow_inactive> <allow_active>yes</allow_active> </defaults> </action> <action id="com.redhat.tuned.stop"> <description>Stop TuneD daemon</description> <message>Authentication is required to stop TuneD daemon</message> <defaults> <allow_any>auth_admin</allow_any> <allow_inactive>auth_admin</allow_inactive> <allow_active>yes</allow_active> </defaults> </action> <action id="com.redhat.tuned.switch_profile"> <description>Switch TuneD profile</description> <message>Authentication is required to switch TuneD profile</message> <defaults> <allow_any>auth_admin</allow_any> <allow_inactive>auth_admin</allow_inactive> <allow_active>yes</allow_active> </defaults> </action> <action id="com.redhat.tuned.log_capture_start"> <description>Start log capture</description> <message>Authentication is required to start log capture</message> <defaults> <allow_any>auth_admin</allow_any> <allow_inactive>auth_admin</allow_inactive> <allow_active>yes</allow_active> </defaults> </action> <action id="com.redhat.tuned.log_capture_finish"> <description>Stop log capture and return captured log</description> <message>Authentication is required to stop log capture</message> <defaults> <allow_any>auth_admin</allow_any> <allow_inactive>auth_admin</allow_inactive> <allow_active>yes</allow_active> </defaults> </action> <action id="com.redhat.tuned.auto_profile"> <description>Enable automatic profile selection mode</description> <message>Authentication is required to change profile selection mode</message> <defaults> <allow_any>auth_admin</allow_any> <allow_inactive>auth_admin</allow_inactive> <allow_active>yes</allow_active> </defaults> </action> <action 
id="com.redhat.tuned.verify_profile">
    <description>Verify TuneD profile</description>
    <message>Authentication is required to verify TuneD profile</message>
    <defaults>
      <allow_any>yes</allow_any>
      <allow_inactive>yes</allow_inactive>
      <allow_active>yes</allow_active>
    </defaults>
  </action>
  <action id="com.redhat.tuned.verify_profile_ignore_missing">
    <description>Verify TuneD profile, ignore missing values</description>
    <message>Authentication is required to verify TuneD profile</message>
    <defaults>
      <allow_any>yes</allow_any>
      <allow_inactive>yes</allow_inactive>
      <allow_active>yes</allow_active>
    </defaults>
  </action>
  <action id="com.redhat.tuned.get_all_plugins">
    <description>Get plugins which TuneD daemon can access</description>
    <message>Authentication is required to get TuneD plugins</message>
    <defaults>
      <allow_any>auth_admin</allow_any>
      <allow_inactive>auth_admin</allow_inactive>
      <allow_active>yes</allow_active>
    </defaults>
  </action>
  <action id="com.redhat.tuned.get_plugin_documentation">
    <description>Get documentation of TuneD plugin</description>
    <message>Authentication is required to get documentation of TuneD plugin</message>
    <defaults>
      <allow_any>auth_admin</allow_any>
      <allow_inactive>auth_admin</allow_inactive>
      <allow_active>yes</allow_active>
    </defaults>
  </action>
  <action id="com.redhat.tuned.get_plugin_hints">
    <description>Get hints for parameters of TuneD plugin</description>
    <message>Authentication is required to get hints for parameters of TuneD plugin</message>
    <defaults>
      <allow_any>auth_admin</allow_any>
      <allow_inactive>auth_admin</allow_inactive>
      <allow_active>yes</allow_active>
    </defaults>
  </action>
</policyconfig>
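Administrators can override these polkit defaults with a rule file; polkit consults /etc/polkit-1/rules.d/ before falling back to the defaults shipped by the package. A hedged sketch follows — the file name and the choice of the "wheel" group are examples only, not part of TuneD:

```
// Example: /etc/polkit-1/rules.d/60-tuned-wheel.rules (hypothetical file)
// Allow members of the "wheel" group to switch TuneD profiles
// without an authentication prompt.
polkit.addRule(function(action, subject) {
    if (action.id == "com.redhat.tuned.switch_profile" &&
        subject.isInGroup("wheel")) {
        return polkit.Result.YES;
    }
});
```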
07070100000014000041ED0000000000000000000000046391BC3A00000000000000000000000000000000000000000000002400000000tuned-2.19.0.29+git.b894a3e/contrib07070100000015000041ED0000000000000000000000026391BC3A00000000000000000000000000000000000000000000002D00000000tuned-2.19.0.29+git.b894a3e/contrib/sysvinit07070100000016000081ED0000000000000000000000016391BC3A00000880000000000000000000000000000000000000003300000000tuned-2.19.0.29+git.b894a3e/contrib/sysvinit/tuned#!/bin/sh ### BEGIN INIT INFO # Provides: tuned # Required-Start: $remote_fs $syslog # Required-Stop: $remote_fs $syslog # Should-Start: $portmap # Should-Stop: $portmap # X-Start-Before: nis # X-Stop-After: nis # Default-Start: 2 3 4 5 # Default-Stop: 0 1 6 # X-Interactive: false # Short-Description: TuneD daemon # Description: Dynamic System Tuning Daemon ### END INIT INFO # PATH should only include /usr/* if it runs after the mountnfs.sh script PATH=/sbin:/usr/sbin:/bin:/usr/bin DESC="System tuning daemon" NAME=tuned DAEMON=/usr/sbin/tuned PIDFILE=/var/run/tuned.pid TUNED_OPT1=--log TUNED_OPT2=--pid TUNED_OPT3=--daemon SCRIPTNAME=/etc/rc.d/init.d/$NAME # Exit if the package is not installed [ -x "$DAEMON" ] || exit 0 # Read configuration variable file if it is present [ -r /etc/default/$NAME ] && . /etc/default/$NAME # Define LSB log_* functions. . /lib/lsb/init-functions do_start() { # Return # 0 if daemon has been started # 1 if daemon was already running # other if daemon could not be started or a failure occured start-stop-daemon --start --quiet --pidfile $PIDFILE --exec $DAEMON -- $TUNED_OPT1 $TUNED_OPT2 $TUNED_OPT3 } do_stop() { # Return # 0 if daemon has been stopped # 1 if daemon was already stopped # other if daemon could not be stopped or a failure occurred start-stop-daemon --stop --quiet --retry=TERM/30/KILL/5 --pidfile $PIDFILE --exec $DAEMON } case "$1" in start) if init_is_upstart; then exit 1 fi log_daemon_msg "Starting $DESC" "$TUNED" do_start case "$?" 
in 0) log_end_msg 0 ;; 1) log_progress_msg "already started" log_end_msg 0 ;; *) log_end_msg 1 ;; esac ;; stop) if init_is_upstart; then exit 0 fi log_daemon_msg "Stopping $DESC" "$TUNED" do_stop case "$?" in 0) log_end_msg 0 ;; 1) log_progress_msg "already stopped" log_end_msg 0 ;; *) log_end_msg 1 ;; esac ;; restart) if init_is_upstart; then exit 1 fi $0 stop $0 start ;; status) status_of_proc -p $PIDFILE $DAEMON $TUNED && exit 0 || exit $? ;; *) echo "Usage: $SCRIPTNAME {start|stop|restart|status}" >&2 exit 3 ;; esac : 07070100000017000041ED0000000000000000000000026391BC3A00000000000000000000000000000000000000000000002C00000000tuned-2.19.0.29+git.b894a3e/contrib/upstart07070100000018000081A40000000000000000000000016391BC3A000000AE000000000000000000000000000000000000003700000000tuned-2.19.0.29+git.b894a3e/contrib/upstart/tuned.confdescription "Dynamic system tuning daemon" start on runlevel [2345] stop on runlevel [!2345] exec tuned --log --pid --daemon post-stop exec rm -f /var/run/tuned/tuned.pid 07070100000019000081A40000000000000000000000016391BC3A0000024B000000000000000000000000000000000000002600000000tuned-2.19.0.29+git.b894a3e/dbus.conf<?xml version="1.0"?> <!DOCTYPE busconfig PUBLIC "-//freedesktop//DTD D-BUS Bus Configuration 1.0//EN" "http://www.freedesktop.org/standards/dbus/1.0/busconfig.dtd"> <busconfig> <policy context="default"> <allow receive_sender="com.redhat.tuned" /> <allow send_destination="com.redhat.tuned" send_interface="org.freedesktop.DBus.Introspectable" /> <allow send_destination="com.redhat.tuned" send_interface="com.redhat.tuned.control" /> </policy> <policy user="root"> <allow own="com.redhat.tuned" /> <allow send_destination="com.redhat.tuned" /> </policy> </busconfig> 
0707010000001A000041ED0000000000000000000000036391BC3A00000000000000000000000000000000000000000000002000000000tuned-2.19.0.29+git.b894a3e/doc
0707010000001B000081A40000000000000000000000016391BC3A00000117000000000000000000000000000000000000002B00000000tuned-2.19.0.29+git.b894a3e/doc/README.NFV
The package tuned-profiles-nfv is kept for backward compatibility and may be dropped in the future. It has no useful content of its own; it only brings in the tuned-profiles-nfv-guest and tuned-profiles-nfv-host dependencies. You can install those by hand and remove the tuned-profiles-nfv package.
0707010000001C000081A40000000000000000000000016391BC3A000008D8000000000000000000000000000000000000002E00000000tuned-2.19.0.29+git.b894a3e/doc/README.scomes
Author: Jan Hutar <jhutar@redhat.com>

Prepare your system:

# yum install systemtap
# debuginfo-install kernel

Usage

The binary you want to measure must be named uniquely (or ensure there are no other binaries with the same name running on the system). Now run scomes, passing the measured command and the timer interval:

# scomes -c "<binary> [<binary args> ...]" <timer>

<binary> [<binary arg> ...]
- measured program
<timer> - how often you want to see the current results; the value is in seconds, and 0 means "show only the last results"

scomes will start to output statistics every <timer> seconds, and once the binary ends, it will output the final statistics like this:

Monitored execname: my_binary_3d4f8
Number of syscalls: 0
Kernel/Userspace ticks: 0/0
Read/Written bytes: 0
Transmitted/Recived bytes: 0
Polling syscalls: 0
SCORE: 0
-----------------------------------
Monitored execname: my_binary_3d4f8
Number of syscalls: 3716
Kernel/Userspace ticks: 34/339
Read/Written bytes: 446282
Transmitted/Recived bytes: 16235
Polling syscalls: 2
SCORE: 4479767
-----------------------------------
LAST RESULTS:
-----------------------------------
Monitored execname: my_binary_3d4f8
Number of syscalls: 4529
Kernel/Userspace ticks: 44/446
Read/Written bytes: 454352
Transmitted/Recived bytes: 22003
Polling syscalls: 3
SCORE: 4566459
-----------------------------------
QUITTING
-----------------------------------

Note: on F11, please call scomes with stap --skip-badvars scomes.stp.
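The SCORE printed in the reports above is a plain weighted sum of the collected counters, using the weights listed in the statistics explanation of this README. A minimal Python sketch of the same calculation (the function name is made up for illustration; the counter values come from the second report above, whose combined transmitted+received total is passed as a single number):

```python
def scomes_score(kticks, uticks, readwrites, ifxmit, ifrecv):
    # Weighted sum used by scomes: userspace ticks and file IO are
    # penalized more heavily than kernel ticks and network traffic.
    return kticks + 2 * uticks + 10 * readwrites + ifxmit + ifrecv

# Second report above: 34/339 ticks, 446282 read/written bytes,
# 16235 transmitted+received bytes.
print(scomes_score(34, 339, 446282, 16235, 0))  # -> 4479767
```

Note that the formula reproduces the sample reports exactly: the second report combines to 4479767 and the final one to 4566459.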
Explanation of the statistics:

Monitored execname - name of the binary (passed as a command-line argument)
Number of syscalls - number of all syscalls performed by the binary
Kernel/Userspace ticks - count of the processor ticks the binary spends in the kernel and in userspace, respectively (the kticks and uticks variables)
Read/Written bytes - sum of the bytes the binary reads from and writes to files (the readwrites variable)
Transmitted/Recived bytes - sum of the bytes the binary transmits and receives over the network (the ifxmit and ifrecv variables)
Polling syscalls - "bad" polling syscalls the binary performs (poll, select, epoll, itimer, futex, nanosleep, signal)
SCORE - TODO, but for now: SCORE = kticks + 2*uticks + 10*readwrites + ifxmit + ifrecv
0707010000001D000081A40000000000000000000000016391BC3A00001CE1000000000000000000000000000000000000002D00000000tuned-2.19.0.29+git.b894a3e/doc/README.utils
Systemtap disk and network statistic monitoring tools
=====================================================

The netdevstat and diskdevstat are two SystemTap tools that allow the user to collect detailed information about the network and disk activity of all applications running on a system. The two tools were inspired by powertop, which shows the number of wakeups for every application per second. The basic idea is to collect statistics about the running applications in a form that allows the user to identify power-greedy applications - for example, applications that perform many small IO operations instead of fewer, bigger ones. Typical monitoring tools only show transfer speeds, which are not very meaningful in that context.

To run these tools, you need to have systemtap installed. Then you can simply type netdevstat or diskdevstat and the scripts will start. Both can take up to 3 parameters:

diskdevstat [Update interval] [Total duration] [Display histogram at the end]
netdevstat [Update interval] [Total duration] [Display histogram at the end]

Update interval: Time in seconds between updates for the display.
Default: 5

Total duration: Time in seconds for the whole run.
Default: 86400 (1 day)

Display histogram at the end: Flag controlling whether a histogram of all collected data is displayed at the end of the run.

The output will look similar to top and/or powertop. Here is a sample output of a longer diskdevstat run on a Fedora 10 system running KDE 4.2:

PID UID DEV WRITE_CNT WRITE_MIN WRITE_MAX WRITE_AVG READ_CNT READ_MIN READ_MAX READ_AVG COMMAND
2789 2903 sda1 854 0.000 120.000 39.836 0 0.000 0.000 0.000 plasma
15494 0 sda1 0 0.000 0.000 0.000 758 0.000 0.012 0.000 0logwatch
15520 0 sda1 0 0.000 0.000 0.000 140 0.000 0.009 0.000 perl
15549 0 sda1 0 0.000 0.000 0.000 140 0.000 0.009 0.000 perl
15585 0 sda1 0 0.000 0.000 0.000 108 0.001 0.002 0.000 perl
2573 0 sda1 63 0.033 3600.015 515.226 0 0.000 0.000 0.000 auditd
15429 0 sda1 0 0.000 0.000 0.000 62 0.009 0.009 0.000 crond
15379 0 sda1 0 0.000 0.000 0.000 62 0.008 0.008 0.000 crond
15473 0 sda1 0 0.000 0.000 0.000 62 0.008 0.008 0.000 crond
15415 0 sda1 0 0.000 0.000 0.000 62 0.008 0.008 0.000 crond
15433 0 sda1 0 0.000 0.000 0.000 62 0.008 0.008 0.000 crond
15425 0 sda1 0 0.000 0.000 0.000 62 0.007 0.007 0.000 crond
15375 0 sda1 0 0.000 0.000 0.000 62 0.008 0.008 0.000 crond
15477 0 sda1 0 0.000 0.000 0.000 62 0.007 0.007 0.000 crond
15469 0 sda1 0 0.000 0.000 0.000 62 0.007 0.007 0.000 crond
15419 0 sda1 0 0.000 0.000 0.000 62 0.008 0.008 0.000 crond
15481 0 sda1 0 0.000 0.000 0.000 61 0.000 0.001 0.000 crond
15355 0 sda1 0 0.000 0.000 0.000 37 0.000 0.014 0.001 laptop_mode
2153 0 sda1 26 0.003 3600.029 1290.730 0 0.000 0.000 0.000 rsyslogd
15575 0 sda1 0 0.000 0.000 0.000 16 0.000 0.000 0.000 cat
15581 0 sda1 0 0.000 0.000 0.000 12 0.001 0.002 0.000 perl
15582 0 sda1 0 0.000 0.000 0.000 12 0.001 0.002 0.000 perl
15579 0 sda1 0 0.000 0.000 0.000 12 0.000 0.001 0.000 perl
15580 0 sda1 0 0.000 0.000 0.000 12 0.001 0.001 0.000 perl
15354 0 sda1 0 0.000 0.000 0.000 12 0.000 0.170 0.014 sh
15584 0 sda1 0 0.000 0.000 0.000 12 0.001 0.002
0.000 perl
15548 0 sda1 0 0.000 0.000 0.000 12 0.001 0.014 0.001 perl
15577 0 sda1 0 0.000 0.000 0.000 12 0.001 0.003 0.000 perl
15519 0 sda1 0 0.000 0.000 0.000 12 0.001 0.005 0.000 perl
15578 0 sda1 0 0.000 0.000 0.000 12 0.001 0.001 0.000 perl
15583 0 sda1 0 0.000 0.000 0.000 12 0.001 0.001 0.000 perl
15547 0 sda1 0 0.000 0.000 0.000 11 0.000 0.002 0.000 perl
15576 0 sda1 0 0.000 0.000 0.000 11 0.001 0.001 0.000 perl
15518 0 sda1 0 0.000 0.000 0.000 11 0.000 0.001 0.000 perl
15354 0 sda1 0 0.000 0.000 0.000 10 0.053 0.053 0.005 lm_lid.sh

Here is a quick explanation of each column:

PID: Process ID of the application
UID: User ID under which the application is running
DEV: Device on which the IO took place
WRITE_CNT: Total number of write operations
WRITE_MIN: Lowest time in seconds between 2 consecutive writes
WRITE_MAX: Largest time in seconds between 2 consecutive writes
WRITE_AVG: Average time in seconds between 2 consecutive writes
READ_CNT: Total number of read operations
READ_MIN: Lowest time in seconds between 2 consecutive reads
READ_MAX: Largest time in seconds between 2 consecutive reads
READ_AVG: Average time in seconds between 2 consecutive reads
COMMAND: Name of the process

In this example, 3 very obvious applications stand out:

PID UID DEV WRITE_CNT WRITE_MIN WRITE_MAX WRITE_AVG READ_CNT READ_MIN READ_MAX READ_AVG COMMAND
2789 2903 sda1 854 0.000 120.000 39.836 0 0.000 0.000 0.000 plasma
2573 0 sda1 63 0.033 3600.015 515.226 0 0.000 0.000 0.000 auditd
2153 0 sda1 26 0.003 3600.029 1290.730 0 0.000 0.000 0.000 rsyslogd

These are the 3 applications with a WRITE_CNT > 0, meaning they performed some form of write during the measurement. Of those, plasma was by far the worst offender: it has the highest total number of writes, and its average time between writes is the lowest. It would be the best candidate to investigate if you're concerned about power-inefficient applications.
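The triage shown above (keep only rows with WRITE_CNT > 0, then rank them) is easy to script. A small Python sketch over a few rows of the sample output; the tuple layout here is my own shorthand for some of the diskdevstat columns and is not part of the tools themselves:

```python
# (pid, uid, dev, write_cnt, write_min, write_max, write_avg, command)
# Read columns omitted for brevity; values taken from the sample above.
rows = [
    (2789, 2903, "sda1", 854, 0.000, 120.000, 39.836, "plasma"),
    (2573, 0, "sda1", 63, 0.033, 3600.015, 515.226, "auditd"),
    (2153, 0, "sda1", 26, 0.003, 3600.029, 1290.730, "rsyslogd"),
    (15494, 0, "sda1", 0, 0.0, 0.0, 0.0, "0logwatch"),
]

# Keep only processes that wrote at all; frequent writers (high count,
# short average gap between writes) keep the disk from powering down.
writers = sorted((r for r in rows if r[3] > 0), key=lambda r: r[3], reverse=True)
for pid, uid, dev, cnt, _wmin, _wmax, avg, cmd in writers:
    print(f"{cmd} (pid {pid}): {cnt} writes, {avg:.3f}s between writes on average")
```

Run against the sample data, this lists plasma first, matching the analysis above.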
0707010000001E000081A40000000000000000000000016391BC3A000008C0000000000000000000000000000000000000002900000000tuned-2.19.0.29+git.b894a3e/doc/TIPS.txt
=== Simple user tips for improving power usage ===

* Use a properly dimensioned system for the job (e.g. no need for an overpowered system for simple desktop use).
* For servers, consolidate services on fewer systems to maximize the efficiency of each system.
* For a server farm, consolidate all physical machines onto one bigger server using virtualization.
* Enforce turning off machines that are not used (e.g. by company policy).
* Try to establish a company culture that is Green "aware", including but not limited to the above point.
* Unplug and/or turn off peripherals that aren't used (e.g. external USB devices, monitors, printers, scanners).
* Turn off unused hardware directly in the BIOS.
* Disable power hungry features.
* Enable CPU frequency scaling with the ondemand governor if supported. DO NOT use the powersave governor; it typically uses more power than ondemand (race to idle).
* Force the network card to 100 Mbit/10 Mbit:
** 10 Mbit: ethtool -s eth0 advertise 0x002
** 100 Mbit: ethtool -s eth0 advertise 0x008
** Doesn't work for every card
* Configure the hard disk for fast spindown and full power saving:
** hdparm -S240 /dev/sda (20 min idle to spindown)
** hdparm -B1 /dev/sda (max powersave mode)
* Make sure writes to the hard disk don't wake it up too quickly:
** Set flushing to once per 30 seconds:
** echo "3000" > /proc/sys/vm/dirty_writeback_centisecs
** Enable laptop mode:
** echo "5" > /proc/sys/vm/laptop_mode
* Use relatime for your / partition:
** mount -o remount,relatime /
* Enable USB autosuspend by adding the following to the kernel boot command line:
** usbcore.autosuspend=5
* The screensaver needs to DPMS off the screen, not just paint it black.
To turn off monitor after 120s when X is running: ** xset dpms 0 0 120 === Simple programmer tips for improving power usage === * Avoid unnecessary work/computation * Use efficient algorithms * Wake up only when necessary/real work is pending * Do not actively poll in programs or use short regular timeouts, rather react to events * If you wake up, do everything at once (race to idle) and as fast as possible * Use large buffers to avoid frequent disk access. Write one large block at a time * Don't use [f]sync() if not necessary * Group timers across applications if possible (even systems) 0707010000001F000041ED0000000000000000000000056391BC3A00000000000000000000000000000000000000000000002700000000tuned-2.19.0.29+git.b894a3e/doc/manual07070100000020000081A40000000000000000000000016391BC3A0000011C000000000000000000000000000000000000003000000000tuned-2.19.0.29+git.b894a3e/doc/manual/Makefile.PHONY: clean index.html: master.adoc assemblies/*.adoc meta/*.adoc modules/performance/*.adoc asciidoctor -o index.html master.adoc || asciidoc -o index.html master.adoc install: index.html install -Dpm 0644 index.html $(DESTDIR)$(DOCDIR)/manual/index.html clean: rm -f *.html 07070100000021000081A40000000000000000000000016391BC3A000006C5000000000000000000000000000000000000003300000000tuned-2.19.0.29+git.b894a3e/doc/manual/README.adoc= About this documentation This directory contains source files of TuneD documentation intended for system administrators. == Building the source The documentation is written in the *AsciiDoc* markup language. To build it, install the link:https://asciidoctor.org/[asciidoctor] utility and use it to convert the master file: ---- $ asciidoctor doc/tuned-documentation/master.adoc ---- This generates the `master.html` file, which you can open with your web browser. == Structure The `master.adoc` file is the main entry point for the documentation. 
It _includes_ (or, imports, loads) _assembly_ files from the `assemblies/` directory, which represent user stories. These assembly files then include _modules_ located in `modules/performance/`. Modules are reusable sections of content representing a concept, a procedure, or a reference. == Naming conventions Use the following naming conventions when referring to TuneD and its components in the documentation: * "the *TuneD* application", referring to the complete set of software, including executables, profiles, scripts, documentation, artwork, etc. Written as `the \*TuneD* application` in AsciiDoc, because application names are in bold text. * "the TuneD project", referring to the developers and contributors, the web pages, repositories, planning, etc. * "the `tuned` service", referring to the `tuned.service` systemd unit and the `tuned` executable * "the `tuned-adm` utility", referring to the `tuned-adm` executable * "the `tuned` and `tuned-adm` commands", referring to the text typed into the terminal to run components of TuneD This is consistent with other naming schemes. For example, consider "the Firefox application" vs. "the `firefox` command". 07070100000022000041ED0000000000000000000000026391BC3A00000000000000000000000000000000000000000000003200000000tuned-2.19.0.29+git.b894a3e/doc/manual/assemblies07070100000023000081A40000000000000000000000016391BC3A0000083F000000000000000000000000000000000000005B00000000tuned-2.19.0.29+git.b894a3e/doc/manual/assemblies/assembly_customizing-tuned-profiles.adoc:parent-context-of-customizing-tuned-profiles: {context} [id='customizing-tuned-profiles_{context}'] = Customizing TuneD profiles :context: customizing-tuned-profiles [role="_abstract"] You can create or modify *TuneD* profiles to optimize system performance for your intended use case. 
.Prerequisites

ifndef::pantheonenv[]
* Install and enable *TuneD* as described in xref:installing-and-enabling-tuned_getting-started-with-tuned[Installing and Enabling Tuned].
endif::[]
ifdef::pantheonenv[]
* Install and enable *TuneD* as described in xref:modules/performance/proc_installing-and-enabling-tuned.adoc[Installing and Enabling Tuned].
endif::[]

include::modules/performance/con_tuned-profiles.adoc[leveloffset=+1]

include::modules/performance/con_the-default-tuned-profile.adoc[leveloffset=+1]

include::modules/performance/con_merged-tuned-profiles.adoc[leveloffset=+1]

include::modules/performance/con_the-location-of-tuned-profiles.adoc[leveloffset=+1]

include::modules/performance/con_inheritance-between-tuned-profiles.adoc[leveloffset=+1]

include::modules/performance/con_static-and-dynamic-tuning-in-tuned.adoc[leveloffset=+1]

include::modules/performance/con_tuned-plug-ins.adoc[leveloffset=+1]

include::modules/performance/ref_available-tuned-plug-ins.adoc[leveloffset=+1]

include::modules/performance/con_variables-in-tuned-profiles.adoc[leveloffset=+1]

include::modules/performance/con_built-in-functions-in-tuned-profiles.adoc[leveloffset=+1]

include::modules/performance/ref_built-in-functions-available-in-tuned-profiles.adoc[leveloffset=+1]

include::modules/performance/proc_creating-new-tuned-profiles.adoc[leveloffset=+1]

include::modules/performance/proc_modifying-existing-tuned-profiles.adoc[leveloffset=+1]

include::modules/performance/proc_setting-the-disk-scheduler-using-tuned.adoc[leveloffset=+1]

ifdef::upstream[]
[id='related-information-{context}']
== Related information

* The `tuned.conf(5)` man page
* The *TuneD* project website: link:https://tuned-project.org/[]
endif::[]

:context: {parent-context-of-customizing-tuned-profiles}
07070100000024000081A40000000000000000000000016391BC3A000007C8000000000000000000000000000000000000005B00000000tuned-2.19.0.29+git.b894a3e/doc/manual/assemblies/assembly_getting-started-with-tuned.adoc:parent-context-of-getting-started-with-tuned: {context} [id='getting-started-with-tuned_{context}'] = Getting started with TuneD :context: getting-started-with-tuned [role="_abstract"] As a system administrator, you can use the *TuneD* application to optimize the performance profile of your system for a variety of use cases. // .Prerequisites // // * A bulleted list of conditions that must be satisfied before the user starts following this assembly. // * You can also link to other modules or assemblies the user must follow before starting this assembly. // * Delete the section title and bullets if the assembly has no prerequisites. include::modules/performance/con_the-purpose-of-tuned.adoc[leveloffset=+1] include::modules/performance/con_tuned-profiles.adoc[leveloffset=+1] include::modules/performance/con_the-default-tuned-profile.adoc[leveloffset=+1] include::modules/performance/con_merged-tuned-profiles.adoc[leveloffset=+1] include::modules/performance/con_the-location-of-tuned-profiles.adoc[leveloffset=+1] include::modules/performance/ref_tuned-profiles-distributed-with-rhel.adoc[leveloffset=+1] include::modules/performance/ref_real-time-tuned-profiles-distributed-with-rhel.adoc[leveloffset=+1] include::modules/performance/con_static-and-dynamic-tuning-in-tuned.adoc[leveloffset=+1] include::modules/performance/con_tuned-no-daemon-mode.adoc[leveloffset=+1] include::modules/performance/proc_installing-and-enabling-tuned.adoc[leveloffset=+1] include::modules/performance/proc_listing-available-tuned-profiles.adoc[leveloffset=+1] include::modules/performance/proc_setting-a-tuned-profile.adoc[leveloffset=+1] include::modules/performance/proc_disabling-tuned.adoc[leveloffset=+1] ifdef::upstream[] [id='related-information-{context}'] == Related information * The `tuned(8)` 
man page * The `tuned-adm(8)` man page * The *TuneD* project website: link:https://tuned-project.org/[] endif::[] :context: {parent-context-of-getting-started-with-tuned} 070701000000250000A1FF000000000000000000000001620CDEAC0000000A000000000000000000000000000000000000003A00000000tuned-2.19.0.29+git.b894a3e/doc/manual/assemblies/modules../modules07070100000026000081A40000000000000000000000016391BC3A000005CA000000000000000000000000000000000000003300000000tuned-2.19.0.29+git.b894a3e/doc/manual/master.adoc:revnumber: 2.10.0 :revdate: 2019-01-04 :keywords: documentation, tuned, performance, power, linux :toc: // The revnumber attribute is intended to show the TuneD version for which the document has been updated [id="{tuned-documentation}"] = TuneD documentation: Optimizing system throughput, latency, and power consumption // Load externally defined attributes include::meta/attributes.adoc[] // Set context for all included assemblies :context: tuned-documentation // This flag turns on internal, debug information in all included assemblies // :internal: // Abstract (Preamble): This documentation explains how to use the *TuneD* application to monitor and optimize the throughput, latency, and power consumption of your system in different scenarios. // The following is copied from the standard Red Hat legal notice // as used in all Red Hat documentation. // TODO: Figure out what the proper legal usage is. //// .Legal notice The text of and illustrations in this document are licensed under a Creative Commons Attribution–Share Alike 3.0 Unported license ("CC-BY-SA"). An explanation of CC-BY-SA is available at link:http://creativecommons.org/licenses/by-sa/3.0/[]. In accordance with CC-BY-SA, if you distribute this document or an adaptation of it, you must provide the URL for the original version. 
//// include::assemblies/assembly_getting-started-with-tuned.adoc[leveloffset=+1] include::assemblies/assembly_customizing-tuned-profiles.adoc[leveloffset=+1] 07070100000027000041ED0000000000000000000000026391BC3A00000000000000000000000000000000000000000000002C00000000tuned-2.19.0.29+git.b894a3e/doc/manual/meta07070100000028000081A40000000000000000000000016391BC3A000002B7000000000000000000000000000000000000003C00000000tuned-2.19.0.29+git.b894a3e/doc/manual/meta/attributes.adoc// This flag is used to compile an upstream, vendor-agnostic // book from the same source as the downstream RHEL documentation. // Modules can check if the flag is defined and enable different // content based on it. :upstream: :Year: 2020 // The following attribute is necessary for images to work :imagesdir: images // Enable additional AsciiDoc macros :experimental: // Red Hat and divisions :RH: Red{nbsp}Hat :CCS: Customer Content Services :OrgName: {RH} :OrgDiv: {CCS} // The product (RHEL) :ProductName: {RH} Enterprise{nbsp}Linux :RHEL: {ProductName} :ProductShortName: RHEL // This is the version displayed under "Red Hat Enterprise Linux" :ProductNumber: 8 :RHEL8: {RHEL}{nbsp}8 07070100000029000041ED0000000000000000000000036391BC3A00000000000000000000000000000000000000000000002F00000000tuned-2.19.0.29+git.b894a3e/doc/manual/modules0707010000002A000041ED0000000000000000000000026391BC3A00000000000000000000000000000000000000000000003B00000000tuned-2.19.0.29+git.b894a3e/doc/manual/modules/performance0707010000002B000081A40000000000000000000000016391BC3A00000531000000000000000000000000000000000000006900000000tuned-2.19.0.29+git.b894a3e/doc/manual/modules/performance/con_built-in-functions-in-tuned-profiles.adoc:_module-type: CONCEPT [id="built-in-functions-in-tuned-profiles_{context}"] = Built-in functions in TuneD profiles [role="_abstract"] Built-in functions expand at run time when a *TuneD* profile is activated. 
You can: * Use various built-in functions together with *TuneD* variables * Create custom functions in Python and add them to *TuneD* in the form of plug-ins To call a function, use the following syntax: [subs=+quotes] ---- ${f:[replaceable]__function_name__:[replaceable]__argument_1__:[replaceable]__argument_2__} ---- To expand the directory path where the profile and the `tuned.conf` file are located, use the `PROFILE_DIR` function, which requires special syntax: ---- ${i:PROFILE_DIR} ---- .Isolating CPU cores using variables and built-in functions ==== In the following example, the `${non_isolated_cores}` variable expands to `0,3-5`, and the `cpulist_invert` built-in function is called with the `0,3-5` argument: ---- [variables] non_isolated_cores=0,3-5 [bootloader] cmdline=isolcpus=${f:cpulist_invert:${non_isolated_cores}} ---- The `cpulist_invert` function inverts the list of CPUs. For a 6-CPU machine, the inversion is `1,2`, and the kernel boots with the [option]`isolcpus=1,2` command-line option. ==== [role="_additional-resources"] .Additional resources * `tuned.conf(5)` man page 0707010000002C000081A40000000000000000000000016391BC3A0000054E000000000000000000000000000000000000006700000000tuned-2.19.0.29+git.b894a3e/doc/manual/modules/performance/con_inheritance-between-tuned-profiles.adoc:_module-type: CONCEPT [id="inheritance-between-tuned-profiles_{context}"] = Inheritance between TuneD profiles [role="_abstract"] *TuneD* profiles can be based on other profiles and modify only certain aspects of their parent profile. The `[main]` section of *TuneD* profiles recognizes the [option]`include` option: [subs=+quotes] ---- [main] include=[replaceable]_parent_ ---- All settings from the [replaceable]_parent_ profile are loaded in this _child_ profile. In the following sections, the _child_ profile can override certain settings inherited from the [replaceable]_parent_ profile or add new settings not present in the [replaceable]_parent_ profile. 
You can create your own _child_ profile in the [filename]`/etc/tuned/` directory based on a pre-installed profile in [filename]`/usr/lib/tuned/` with only some parameters adjusted. If the [replaceable]_parent_ profile is updated, such as after a *TuneD* upgrade, the changes are reflected in the _child_ profile.

.A power-saving profile based on balanced
====
The following is an example of a custom profile that extends the `balanced` profile and sets Aggressive Link Power Management (ALPM) for all devices to maximum power saving.

----
[main]
include=balanced

[scsi_host]
alpm=min_power
----
====

[role="_additional-resources"]
.Additional resources

* `tuned.conf(5)` man page
0707010000002D000081A40000000000000000000000016391BC3A0000045D000000000000000000000000000000000000005A00000000tuned-2.19.0.29+git.b894a3e/doc/manual/modules/performance/con_merged-tuned-profiles.adoc
:_module-type: CONCEPT

[id="merged-tuned-profiles_{context}"]
= Merged TuneD profiles

[role="_abstract"]
As an experimental feature, it is possible to select multiple profiles at once. *TuneD* tries to merge them during the load. If there are conflicts, the settings from the last specified profile take precedence.

.Low power consumption in a virtual guest
====
The following example optimizes the system to run in a virtual machine for the best performance and concurrently tunes it for low power consumption, with low power consumption as the priority:

----
# tuned-adm profile virtual-guest powersave
----
====

WARNING: Merging is done automatically without checking whether the resulting combination of parameters makes sense. Consequently, the feature might tune some parameters the opposite way, which might be counterproductive: for example, setting the disk for high throughput by using the `throughput-performance` profile and concurrently setting the disk spindown to a low value by using the `spindown-disk` profile.

[role="_additional-resources"]
.Additional resources

* `tuned.conf(5)` man page.
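The `cpulist_invert` built-in function shown in the examples above can be illustrated with a short Python sketch. This is a hypothetical reimplementation for explanation only; it assumes a plain comma-separated list of `N` and `N-M` entries and an explicit CPU count, while the real TuneD built-in detects the CPU count itself:

```python
def parse_cpulist(cpulist: str) -> set:
    """Parse a CPU list such as '0,3-5' into a set of CPU numbers."""
    cpus = set()
    for part in cpulist.split(","):
        part = part.strip()
        if "-" in part:
            lo, hi = part.split("-")
            cpus.update(range(int(lo), int(hi) + 1))
        else:
            cpus.add(int(part))
    return cpus


def cpulist_invert(cpulist: str, num_cpus: int) -> str:
    """Return the CPUs NOT in the list, as a comma-separated string."""
    inverted = sorted(set(range(num_cpus)) - parse_cpulist(cpulist))
    return ",".join(str(cpu) for cpu in inverted)
```

For the documented example, inverting `0,3-5` on a 6-CPU machine yields `1,2`, which is exactly what ends up in the `isolcpus=` kernel option.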
0707010000002E000081A40000000000000000000000016391BC3A00000D19000000000000000000000000000000000000006700000000tuned-2.19.0.29+git.b894a3e/doc/manual/modules/performance/con_static-and-dynamic-tuning-in-tuned.adoc
:_module-type: CONCEPT

[id="static-and-dynamic-tuning-in-tuned_{context}"]
= Static and dynamic tuning in TuneD

[role="_abstract"]
This section explains the difference between the two categories of system tuning that *TuneD* applies: _static_ and _dynamic_.

// TODO: Move some of this content into a separate module ("Enabling dynamic tuning"). It seems to be necessary to (1) enable dynamic tuning globally *and* (2) manually enable it in performance-oriented profiles.

Static tuning::
Mainly consists of the application of predefined `sysctl` and `sysfs` settings and the one-shot activation of several configuration tools such as `ethtool`.

Dynamic tuning::
Watches how various system components are used throughout the uptime of your system. *TuneD* adjusts system settings dynamically based on that monitoring information.
+
For example, the hard drive is used heavily during startup and login, but is barely used later when the user might mainly work with applications such as web browsers or email clients. Similarly, the CPU and network devices are used differently at different times. *TuneD* monitors the activity of these components and reacts to the changes in their use.
+
By default, dynamic tuning is disabled. To enable it, edit the [filename]`/etc/tuned/tuned-main.conf` file and change the [option]`dynamic_tuning` option to `1`. *TuneD* then periodically analyzes system statistics and uses them to update your system tuning settings. To configure the time interval in seconds between these updates, use the [option]`update_interval` option.
+
The currently implemented dynamic tuning algorithms try to balance performance and power saving, and are therefore disabled in the performance profiles.
Dynamic tuning for individual plug-ins can be enabled or disabled in the *TuneD* profiles. // Internal note: Dynamic tuning is still disabled as of RHEL 8.0 Beta. .Static and dynamic tuning on a workstation ==== On a typical office workstation, the Ethernet network interface is inactive most of the time. Only a few emails go in and out or some web pages might be loaded. For those kinds of loads, the network interface does not have to run at full speed all the time, as it does by default. *TuneD* has a monitoring and tuning plug-in for network devices that can detect this low activity and then automatically lower the speed of that interface, typically resulting in a lower power usage. If the activity on the interface increases for a longer period of time, for example because a DVD image is being downloaded or an email with a large attachment is opened, *TuneD* detects this and sets the interface speed to maximum to offer the best performance while the activity level is high. This principle is used for other plug-ins for CPU and disks as well. ==== // .Additional resources // // * A bulleted list of links to other material closely related to the contents of the concept module. // * For more details on writing concept modules, see the link:https://github.com/redhat-documentation/modular-docs#modular-documentation-reference-guide[Modular Documentation Reference Guide]. // * Use a consistent system for file names, IDs, and titles. For tips, see _Anchor Names and File Names_ in link:https://github.com/redhat-documentation/modular-docs#modular-documentation-reference-guide[Modular Documentation Reference Guide]. 
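The dynamic-tuning switches described above live in [filename]`/etc/tuned/tuned-main.conf`. A minimal excerpt might look like the following; the interval value is illustrative, not necessarily the shipped default:

```ini
# /etc/tuned/tuned-main.conf (excerpt)

# Enable dynamic tuning; TuneD then periodically re-evaluates
# system statistics and updates the tuning accordingly.
dynamic_tuning = 1

# Time in seconds between two updates (illustrative value).
update_interval = 10
```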
0707010000002F000081A40000000000000000000000016391BC3A00000306000000000000000000000000000000000000005E00000000tuned-2.19.0.29+git.b894a3e/doc/manual/modules/performance/con_the-default-tuned-profile.adoc
:_module-type: CONCEPT

[id="the-default-tuned-profile_{context}"]
= The default TuneD profile

[role="_abstract"]
During the installation, the best profile for your system is selected automatically. Currently, the default profile is selected according to the following customizable rules:

[options="header",cols="2,2,3"]
|===
| Environment | Default profile | Goal

| Compute nodes | `throughput-performance` | The best throughput performance

| Virtual machines | `virtual-guest` | The best performance. If you are not interested in the best performance, you can change it to the `balanced` or `powersave` profile.

| Other cases | `balanced` | Balanced performance and power consumption
|===

[role="_additional-resources"]
.Additional resources

* `tuned.conf(5)` man page.
07070100000030000081A40000000000000000000000016391BC3A00000302000000000000000000000000000000000000006300000000tuned-2.19.0.29+git.b894a3e/doc/manual/modules/performance/con_the-location-of-tuned-profiles.adoc
:_module-type: CONCEPT

[id="the-location-of-tuned-profiles_{context}"]
= The location of TuneD profiles

[role="_abstract"]
*TuneD* stores profiles in the following directories:

[filename]`/usr/lib/tuned/`::
Distribution-specific profiles are stored in this directory. Each profile has its own directory. The profile consists of the main configuration file called `tuned.conf`, and optionally other files, for example helper scripts.

[filename]`/etc/tuned/`::
If you need to customize a profile, copy the profile directory into this directory, which is used for custom profiles. If there are two profiles of the same name, the custom profile located in [filename]`/etc/tuned/` is used.

[role="_additional-resources"]
.Additional resources

* `tuned.conf(5)` man page.
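The lookup order described above (a custom profile in `/etc/tuned/` shadows a distribution profile of the same name in `/usr/lib/tuned/`) can be sketched in Python. The helper name and the explicit directory argument are illustrative, not TuneD's actual internals:

```python
from pathlib import Path


def locate_profile(name, search_dirs=("/etc/tuned", "/usr/lib/tuned")):
    """Return the directory holding `tuned.conf` for the named profile.

    Directories are searched in order, so a custom profile in /etc/tuned
    shadows a distribution profile of the same name in /usr/lib/tuned.
    Returns None if the profile is not found.
    """
    for d in search_dirs:
        candidate = Path(d) / name / "tuned.conf"
        if candidate.is_file():
            return candidate.parent
    return None
```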
07070100000031000081A40000000000000000000000016391BC3A00000623000000000000000000000000000000000000005900000000tuned-2.19.0.29+git.b894a3e/doc/manual/modules/performance/con_the-purpose-of-tuned.adoc
:_module-type: CONCEPT

[id="the-purpose-of-tuned_{context}"]
= The purpose of TuneD

[role="_abstract"]
*TuneD* is a service that monitors your system and optimizes its performance under certain workloads. At the core of *TuneD* are _profiles_, which tune your system for different use cases. *TuneD* is distributed with a number of predefined profiles for use cases such as:

* High throughput
* Low latency
* Saving power

It is possible to modify the rules defined for each profile and customize how to tune a particular device. When you switch to another profile or deactivate *TuneD*, all changes made to the system settings by the previous profile revert to their original state.

You can also configure *TuneD* to react to changes in device usage and adjust settings to improve the performance of active devices and reduce the power consumption of inactive devices.

// The TuneD tuning service can adapt the operating system to perform better under certain workloads by setting a tuning profile.

// .Additional resources
//
// * A bulleted list of links to other material closely related to the contents of the concept module.
// * For more details on writing concept modules, see the link:https://github.com/redhat-documentation/modular-docs#modular-documentation-reference-guide[Modular Documentation Reference Guide].
// * Use a consistent system for file names, IDs, and titles. For tips, see _Anchor Names and File Names_ in link:https://github.com/redhat-documentation/modular-docs#modular-documentation-reference-guide[Modular Documentation Reference Guide].
07070100000032000081A40000000000000000000000016391BC3A000004B9000000000000000000000000000000000000005900000000tuned-2.19.0.29+git.b894a3e/doc/manual/modules/performance/con_tuned-no-daemon-mode.adoc:_module-type: CONCEPT [id="tuned-no-daemon-mode_{context}"] = TuneD no-daemon mode // TODO: Should this be a procedure? A user story? Is there a common use case? [role="_abstract"] You can run *TuneD* in `no-daemon` mode, which does not require any resident memory. In this mode, *TuneD* applies the settings and exits. By default, `no-daemon` mode is disabled because a lot of *TuneD* functionality is missing in this mode, including: * D-Bus support * Hot-plug support * Rollback support for settings To enable `no-daemon` mode, include the following line in the [filename]`/etc/tuned/tuned-main.conf` file: ---- daemon = 0 ---- // .Additional resources // // * A bulleted list of links to other material closely related to the contents of the concept module. // * For more details on writing concept modules, see the link:https://github.com/redhat-documentation/modular-docs#modular-documentation-reference-guide[Modular Documentation Reference Guide]. // * Use a consistent system for file names, IDs, and titles. For tips, see _Anchor Names and File Names_ in link:https://github.com/redhat-documentation/modular-docs#modular-documentation-reference-guide[Modular Documentation Reference Guide]. 07070100000033000081A40000000000000000000000016391BC3A00001030000000000000000000000000000000000000005300000000tuned-2.19.0.29+git.b894a3e/doc/manual/modules/performance/con_tuned-plug-ins.adoc:_module-type: CONCEPT [id="tuned-plug-ins_{context}"] = TuneD plug-ins [role="_abstract"] Plug-ins are modules in *TuneD* profiles that *TuneD* uses to monitor or optimize different devices on the system. *TuneD* uses two types of plug-ins: Monitoring plug-ins:: Monitoring plug-ins are used to get information from a running system. 
The output of the monitoring plug-ins can be used by tuning plug-ins for dynamic tuning.
+
Monitoring plug-ins are automatically instantiated whenever their metrics are needed by any of the enabled tuning plug-ins. If two tuning plug-ins require the same data, only one instance of the monitoring plug-in is created and the data is shared.

Tuning plug-ins::
Each tuning plug-in tunes an individual subsystem and takes several parameters that are populated from the *TuneD* profiles. Each subsystem can have multiple devices, such as multiple CPUs or network cards, that are handled by individual instances of the tuning plug-ins. Specific settings for individual devices are also supported.

[discrete]
== Syntax for plug-ins in TuneD profiles

Sections describing plug-in instances are formatted in the following way:

[subs=quotes]
----
[_NAME_]
type=_TYPE_
devices=_DEVICES_
----

NAME::
is the name of the plug-in instance as it is used in the logs. It can be an arbitrary string.

TYPE::
is the type of the tuning plug-in.

DEVICES::
is the list of devices that this plug-in instance handles.
+
The `devices` line can contain a list, a wildcard (`\*`), and negation (`!`). If there is no `devices` line, all devices of the [replaceable]_TYPE_ that are present or later attached to the system are handled by the plug-in instance. This is the same as using the [option]`devices=*` option.
+
.Matching block devices with a plug-in
====
The following example matches all block devices starting with `sd`, such as `sda` or `sdb`, and does not disable barriers on them:

----
[data_disk]
type=disk
devices=sd*
disable_barriers=false
----

The following example matches all block devices except `sda1` and `sda2`:

----
[data_disk]
type=disk
devices=!sda1, !sda2
disable_barriers=false
----
====

If no instance of a plug-in is specified, the plug-in is not enabled. If the plug-in supports more options, they can also be specified in the plug-in section.
If an option is not specified and was not previously specified in the included plug-in, the default value is used.

[discrete]
== Short plug-in syntax

If you do not need custom names for the plug-in instance and there is only one definition of the instance in your configuration file, *TuneD* supports the following short syntax:

[subs=quotes]
----
[_TYPE_]
devices=_DEVICES_
----

In this case, it is possible to omit the `type` line. The instance is then referred to by a name identical to its type. The previous example could then be rewritten as:

.Matching block devices using the short syntax
====
----
[disk]
devices=sdb*
disable_barriers=false
----
====

[discrete]
== Conflicting plug-in definitions in a profile

If the same section is specified more than once using the `include` option, the settings are merged. If they cannot be merged due to a conflict, the last conflicting definition overrides the previous settings. If you do not know what was previously defined, you can use the [option]`replace` Boolean option and set it to `true`. This causes all the previous definitions with the same name to be overwritten, and no merge happens.

You can also disable the plug-in by specifying the [option]`enabled=false` option. This has the same effect as if the instance was never defined. Disabling the plug-in is useful if you are redefining the previous definition from the [option]`include` option and do not want the plug-in to be active in your custom profile.

NOTE::
*TuneD* includes the ability to run any shell command as part of enabling or disabling a tuning profile. This enables you to extend *TuneD* profiles with functionality that has not been integrated into TuneD yet.
+
You can specify arbitrary shell commands using the `script` plug-in.
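The `devices` matching rules described above (a list of names, wildcards, and `!` negation) can be illustrated with a Python sketch using shell-style patterns. This is an approximation for explanation only; TuneD's actual matcher may differ in detail:

```python
from fnmatch import fnmatchcase


def match_devices(patterns: str, devices):
    """Sketch of TuneD-style `devices=` matching.

    Positive patterns may use wildcards (*); patterns prefixed with '!'
    exclude matching devices. If only negative patterns are given, the
    selection starts from all devices (as with `devices=*`).
    """
    pats = [p.strip() for p in patterns.split(",")]
    positive = [p for p in pats if not p.startswith("!")]
    negative = [p[1:] for p in pats if p.startswith("!")]
    if positive:
        selected = {d for d in devices
                    if any(fnmatchcase(d, p) for p in positive)}
    else:
        selected = set(devices)
    return {d for d in selected
            if not any(fnmatchcase(d, p) for p in negative)}
```

With this sketch, `devices=sd*` selects `sda`, `sda1`, and `sdb` from a device pool, while `devices=!sda1, !sda2` selects everything except those two partitions.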
[role="_additional-resources"] .Additional resources * `tuned.conf(5)` man page 07070100000034000081A40000000000000000000000016391BC3A000003CE000000000000000000000000000000000000005300000000tuned-2.19.0.29+git.b894a3e/doc/manual/modules/performance/con_tuned-profiles.adoc:_module-type: CONCEPT [id="tuned-profiles_{context}"] = TuneD profiles [role="_abstract"] A detailed analysis of a system can be very time-consuming. *TuneD* provides a number of predefined profiles for typical use cases. You can also create, modify, and delete profiles. The profiles provided with *TuneD* are divided into the following categories: * Power-saving profiles * Performance-boosting profiles The performance-boosting profiles include profiles that focus on the following aspects: * Low latency for storage and network * High throughput for storage and network * Virtual machine performance * Virtualization host performance [discrete] == Syntax of profile configuration The `tuned.conf` file can contain one `[main]` section and other sections for configuring plug-in instances. However, all sections are optional. Lines starting with the hash sign (`#`) are comments. [role="_additional-resources"] .Additional resources * `tuned.conf(5)` man page. 07070100000035000081A40000000000000000000000016391BC3A000009F7000000000000000000000000000000000000007700000000tuned-2.19.0.29+git.b894a3e/doc/manual/modules/performance/con_variables-and-built-in-functions-in-tuned-profiles.adoc[id="variables-and-built-in-functions-in-tuned-profiles_{context}"] = Variables and built-in functions in TuneD profiles Variables and built-in functions expand at run time when a *TuneD* profile is activated. Using *TuneD* variables reduces the amount of necessary typing in *TuneD* profiles. 
You can also: * Use various built-in functions together with *TuneD* variables * Create custom functions in Python and add them to *TuneD* in the form of plug-ins [discrete] == Variables There are no predefined variables in *TuneD* profiles. You can define your own variables by creating the `[variables]` section in a profile and using the following syntax: [subs=+quotes] ---- [variables] [replaceable]__variable_name__=[replaceable]__value__ ---- To expand the value of a variable in a profile, use the following syntax: [subs=+quotes] ---- ${[replaceable]__variable_name__} ---- .Isolating CPU cores using variables ==== In the following example, the `${isolated_cores}` variable expands to `1,2`; hence the kernel boots with the [option]`isolcpus=1,2` option: ---- [variables] isolated_cores=1,2 [bootloader] cmdline=isolcpus=${isolated_cores} ---- The variables can be specified in a separate file. For example, you can add the following lines to [filename]`tuned.conf`: [subs=+quotes] ---- [variables] include=/etc/tuned/[replaceable]_my-variables.conf_ [bootloader] cmdline=isolcpus=${isolated_cores} ---- If you add the [option]`isolated_cores=1,2` option to the [filename]`/etc/tuned/my-variables.conf` file, the kernel boots with the [option]`isolcpus=1,2` option. 
==== [discrete] == Functions To call a function, use the following syntax: [subs=+quotes] ---- ${f:[replaceable]__function_name__:[replaceable]__argument_1__:[replaceable]__argument_2__} ---- To expand the directory path where the profile and the `tuned.conf` file are located, use the `PROFILE_DIR` function, which requires special syntax: ---- ${i:PROFILE_DIR} ---- .Isolating CPU cores using variables and built-in functions ==== In the following example, the `${non_isolated_cores}` variable expands to `0,3-5`, and the `cpulist_invert` built-in function is called with the `0,3-5` argument: ---- [variables] non_isolated_cores=0,3-5 [bootloader] cmdline=isolcpus=${f:cpulist_invert:${non_isolated_cores}} ---- The `cpulist_invert` function inverts the list of CPUs. For a 6-CPU machine, the inversion is `1,2`, and the kernel boots with the [option]`isolcpus=1,2` command-line option. ==== .Additional resources * The `tuned.conf(5)` man page 07070100000036000081A40000000000000000000000016391BC3A000005B1000000000000000000000000000000000000006000000000tuned-2.19.0.29+git.b894a3e/doc/manual/modules/performance/con_variables-in-tuned-profiles.adoc:_module-type: CONCEPT [id="variables-in-tuned-profiles_{context}"] = Variables in TuneD profiles [role="_abstract"] Variables expand at run time when a *TuneD* profile is activated. Using *TuneD* variables reduces the amount of necessary typing in *TuneD* profiles. There are no predefined variables in *TuneD* profiles. 
You can define your own variables by creating the `[variables]` section in a profile and using the following syntax:

[subs=+quotes]
----
[variables]
[replaceable]__variable_name__=[replaceable]__value__
----

To expand the value of a variable in a profile, use the following syntax:

[subs=+quotes]
----
${[replaceable]__variable_name__}
----

.Isolating CPU cores using variables
====
In the following example, the `${isolated_cores}` variable expands to `1,2`; hence the kernel boots with the [option]`isolcpus=1,2` option:

----
[variables]
isolated_cores=1,2

[bootloader]
cmdline=isolcpus=${isolated_cores}
----

The variables can be specified in a separate file. For example, you can add the following lines to [filename]`tuned.conf`:

[subs=+quotes]
----
[variables]
include=/etc/tuned/[replaceable]_my-variables.conf_

[bootloader]
cmdline=isolcpus=${isolated_cores}
----

If you add the [option]`isolated_cores=1,2` option to the [filename]`/etc/tuned/my-variables.conf` file, the kernel boots with the [option]`isolcpus=1,2` option.
====

[role="_additional-resources"]
.Additional resources
* `tuned.conf(5)` man page

// File: doc/manual/modules/performance/proc_creating-new-tuned-profiles.adoc

:_module-type: PROCEDURE

[id="creating-new-tuned-profiles_{context}"]
= Creating new TuneD profiles

[role="_abstract"]
This procedure creates a new *TuneD* profile with custom performance rules.

.Prerequisites
ifndef::pantheonenv[]
* The `tuned` service is running. See xref:installing-and-enabling-tuned_getting-started-with-tuned[Installing and Enabling Tuned] for details.
endif::[]
ifdef::pantheonenv[]
* The `tuned` service is running. See xref:modules/performance/proc_installing-and-enabling-tuned.adoc[Installing and Enabling Tuned] for details.
endif::[]

.Procedure
. In the [filename]`/etc/tuned/` directory, create a new directory named the same as the profile that you want to create:
+
[subs=+quotes]
----
# mkdir /etc/tuned/[replaceable]_my-profile_
----

. In the new directory, create a file named [filename]`tuned.conf`. Add a `[main]` section and plug-in definitions in it, according to your requirements.
+
For example, see the configuration of the `balanced` profile:
+
----
[main]
summary=General non-specialized TuneD profile

[cpu]
governor=conservative
energy_perf_bias=normal

[audio]
timeout=10

[video]
radeon_powersave=dpm-balanced, auto

[scsi_host]
alpm=medium_power
----

. To activate the profile, use:
+
[subs=+quotes]
----
# tuned-adm profile [replaceable]_my-profile_
----

. Verify that the *TuneD* profile is active and the system settings are applied:
+
[subs=+quotes]
----
$ tuned-adm active
Current active profile: [replaceable]_my-profile_
----
+
----
$ tuned-adm verify
Verification succeeded, current system settings match the preset profile.
See TuneD log file ('/var/log/tuned/tuned.log') for details.
----

[role="_additional-resources"]
.Additional resources
* `tuned.conf(5)` man page

// File: doc/manual/modules/performance/proc_disabling-tuned.adoc

:_module-type: PROCEDURE

[id="disabling-tuned_{context}"]
= Disabling TuneD

[role="_abstract"]
This procedure disables *TuneD* and resets all affected system settings to their original state before *TuneD* modified them.

// .Prerequisites
//
// * A bulleted list of conditions that must be satisfied before the user starts following this assembly.
// * You can also link to other modules or assemblies the user must follow before starting this assembly.
// * Delete the section title and bullets if the assembly has no prerequisites.
.Procedure

* To disable all tunings temporarily:
+
----
# tuned-adm off
----
+
The tunings are applied again after the `tuned` service restarts.

* Alternatively, to stop and disable the `tuned` service permanently:
+
----
# systemctl disable --now tuned
----

[role="_additional-resources"]
.Additional resources
* `tuned-adm(8)` man page

// File: doc/manual/modules/performance/proc_installing-and-enabling-tuned.adoc

:_module-type: PROCEDURE

[id="installing-and-enabling-tuned_{context}"]
= Installing and enabling TuneD

[role="_abstract"]
This procedure installs and enables the *TuneD* application, installs *TuneD* profiles, and presets a default *TuneD* profile for your system.

// .Prerequisites
//
// * A bulleted list of conditions that must be satisfied before the user starts following this assembly.
// * You can also link to other modules or assemblies the user must follow before starting this assembly.
// * Delete the section title and bullets if the assembly has no prerequisites.

.Procedure

. Install the [package]`tuned` package:
+
----
# yum install tuned
----

. Enable and start the `tuned` service:
+
----
# systemctl enable --now tuned
----

. Optionally, install *TuneD* profiles for real-time systems:
+
----
# yum install tuned-profiles-realtime tuned-profiles-nfv
----

. Verify that a *TuneD* profile is active and applied:
+
[subs=+quotes]
----
$ tuned-adm active
Current active profile: [replaceable]_balanced_
----
+
----
$ tuned-adm verify
Verification succeeded, current system settings match the preset profile.
See TuneD log file ('/var/log/tuned/tuned.log') for details.
----

// .Additional resources
//
// * The `tuned-adm(8)` man page.
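For scripted checks, the success message of `tuned-adm verify` can be tested programmatically. The following Python helper is a hypothetical sketch (not part of TuneD) that keys off the message format shown above; in practice, checking the command's exit status may be preferable:

```python
# Hypothetical helper (not part of TuneD): decide whether the captured
# output of `tuned-adm verify` reports success, based on the message
# format shown in the procedure above.
def verify_succeeded(output: str) -> bool:
    """Return True if the first output line reports that the current
    system settings match the preset profile."""
    first_line = output.splitlines()[0] if output else ""
    return first_line.startswith("Verification succeeded")

sample = (
    "Verification succeeded, current system settings match the preset profile.\n"
    "See TuneD log file ('/var/log/tuned/tuned.log') for details.\n"
)
print(verify_succeeded(sample))  # True
```

This only inspects the human-readable message, so it is fragile against wording changes; treat it as a sketch rather than a stable interface.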
// File: doc/manual/modules/performance/proc_listing-available-tuned-profiles.adoc

:_module-type: PROCEDURE

[id="listing-available-tuned-profiles_{context}"]
= Listing available TuneD profiles

[role="_abstract"]
This procedure lists all *TuneD* profiles that are currently available on your system.

//No prerequisites are needed
////
.Prerequisites

* The `tuned` service is running. See xref:installing-and-enabling-tuned_{context}[] for details.
////

.Procedure

* To list all available *TuneD* profiles on your system, use:
+
[subs="+quotes",options="+nowrap",role=white-space-pre]
----
$ *tuned-adm list*
Available profiles:
- balanced - General non-specialized tuned profile
- desktop - Optimize for the desktop use-case
- latency-performance - Optimize for deterministic performance at the cost of increased power consumption
- network-latency - Optimize for deterministic performance at the cost of increased power consumption, focused on low latency network performance
- network-throughput - Optimize for streaming network throughput, generally only necessary on older CPUs or 40G+ networks
- powersave - Optimize for low power consumption
- throughput-performance - Broadly applicable tuning that provides excellent performance across a variety of common server workloads
- virtual-guest - Optimize for running inside a virtual guest
- virtual-host - Optimize for running KVM guests
Current active profile: [replaceable]_balanced_
----

* To display only the currently active profile, use:
+
[subs=+quotes]
----
$ *tuned-adm active*
Current active profile: [replaceable]_balanced_
----

[role="_additional-resources"]
.Additional resources
* The `tuned-adm(8)` man page.
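If you need the profile list in a script, the output format shown above can be parsed. The following Python function is an illustrative sketch (not a TuneD API); it assumes the `- name - description` layout printed by `tuned-adm list`:

```python
# Hypothetical parser (not part of TuneD) for the `tuned-adm list` output
# format shown above: "- <profile> - <description>" lines between the
# "Available profiles:" header and the "Current active profile:" footer.
def parse_profile_list(output: str) -> dict:
    profiles = {}
    for line in output.splitlines():
        line = line.strip()
        if line.startswith("- "):
            # Split on the first " - " separator after the leading dash.
            name, _, description = line[2:].partition(" - ")
            profiles[name.strip()] = description.strip()
    return profiles

sample = """Available profiles:
- balanced - General non-specialized tuned profile
- powersave - Optimize for low power consumption
Current active profile: balanced
"""
print(sorted(parse_profile_list(sample)))  # ['balanced', 'powersave']
```

Because this scrapes human-readable output, a wording or layout change in a future `tuned-adm` release could break it.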
// File: doc/manual/modules/performance/proc_modifying-existing-tuned-profiles.adoc

:_module-type: PROCEDURE

[id="modifying-existing-tuned-profiles_{context}"]
= Modifying existing TuneD profiles

[role="_abstract"]
This procedure creates a modified child profile based on an existing *TuneD* profile.

.Prerequisites
ifndef::pantheonenv[]
* The `tuned` service is running. See xref:installing-and-enabling-tuned_getting-started-with-tuned[Installing and Enabling Tuned] for details.
endif::[]
ifdef::pantheonenv[]
* The `tuned` service is running. See xref:modules/performance/proc_installing-and-enabling-tuned.adoc[Installing and Enabling Tuned] for details.
endif::[]

.Procedure

. In the [filename]`/etc/tuned/` directory, create a new directory named the same as the profile that you want to create:
+
[subs=+quotes]
----
# mkdir /etc/tuned/[replaceable]_modified-profile_
----

. In the new directory, create a file named [filename]`tuned.conf`, and set the `[main]` section as follows:
+
[subs=+quotes]
----
[main]
include=[replaceable]_parent-profile_
----
+
Replace [replaceable]_parent-profile_ with the name of the profile you are modifying.

. Include your profile modifications.
+
--
.Lowering swappiness in the throughput-performance profile
====
To use the settings from the `throughput-performance` profile and change the value of `vm.swappiness` to 5, instead of the default 10, use:

----
[main]
include=throughput-performance

[sysctl]
vm.swappiness=5
----
====
--

. To activate the profile, use:
+
[subs=+quotes]
----
# tuned-adm profile [replaceable]_modified-profile_
----

. Verify that the *TuneD* profile is active and the system settings are applied:
+
[subs=+quotes]
----
$ tuned-adm active
Current active profile: [replaceable]_modified-profile_
----
+
----
$ tuned-adm verify
Verification succeeded, current system settings match the preset profile.
See TuneD log file ('/var/log/tuned/tuned.log') for details.
----

// .An alternative approach
// . Alternatively, copy the directory with a system profile from /usr/lib/tuned/ to /etc/tuned/. For example:
// +
// ----
// # cp -r /usr/lib/tuned/throughput-performance /etc/tuned
// ----
//
// . Then, edit the profile in /etc/tuned according to your needs. Note that if there are two profiles of the same name, the profile located in /etc/tuned/ is loaded. The disadvantage of this approach is that if a system profile is updated after a TuneD upgrade, the changes will not be reflected in the now-outdated modified version.

[role="_additional-resources"]
.Additional resources
* `tuned.conf(5)` man page

// File: doc/manual/modules/performance/proc_setting-a-tuned-profile.adoc

:_module-type: PROCEDURE

[id="setting-a-tuned-profile_{context}"]
= Setting a TuneD profile

[role="_abstract"]
This procedure activates a selected *TuneD* profile on your system.

.Prerequisites
ifndef::pantheonenv[]
* The `tuned` service is running. See xref:installing-and-enabling-tuned_getting-started-with-tuned[Installing and Enabling Tuned] for details.
endif::[]
ifdef::pantheonenv[]
* The `tuned` service is running. See xref:modules/performance/proc_installing-and-enabling-tuned.adoc[Installing and Enabling Tuned] for details.
endif::[]

.Procedure

. Optionally, you can let *TuneD* recommend the most suitable profile for your system:
+
[subs=+quotes]
----
# tuned-adm recommend
[replaceable]_balanced_
----
. Activate a profile:
+
[subs=+quotes]
----
# tuned-adm profile [replaceable]_selected-profile_
----
+
Alternatively, you can activate a combination of multiple profiles:
+
[subs=+quotes]
----
# tuned-adm profile [replaceable]_profile1_ [replaceable]_profile2_
----
+
.A virtual machine optimized for low power consumption
====
The following example optimizes the system to run in a virtual machine with the best performance and concurrently tunes it for low power consumption, with low power consumption being the priority:

----
# tuned-adm profile virtual-guest powersave
----
====

. View the current active *TuneD* profile on your system:
+
[subs=+quotes]
----
# tuned-adm active
Current active profile: [replaceable]_selected-profile_
----

. Reboot the system:
+
----
# reboot
----

.Verification steps

* Verify that the *TuneD* profile is active and applied:
+
----
$ tuned-adm verify
Verification succeeded, current system settings match the preset profile.
See TuneD log file ('/var/log/tuned/tuned.log') for details.
----

[role="_additional-resources"]
.Additional resources
* `tuned-adm(8)` man page

// File: doc/manual/modules/performance/proc_setting-the-disk-scheduler-using-tuned.adoc

:_module-type: PROCEDURE

[id="setting-the-disk-scheduler-using-tuned_{context}"]
= Setting the disk scheduler using TuneD

[role="_abstract"]
This procedure creates and enables a *TuneD* profile that sets a given disk scheduler for selected block devices. The setting persists across system reboots.

In the following commands and configuration, replace:

* [replaceable]_device_ with the name of the block device, for example `sdf`
* [replaceable]_selected-scheduler_ with the disk scheduler that you want to set for the device, for example `bfq`

.Prerequisites
// Use an xref if we're inside the performance title or the upstream Tuned manual.
// Wrap performance title inside pantheonenv[] ifndef. Use pantheonenv[] ifdef to use correct xref syntax for PV2.
ifndef::pantheonenv[]
ifdef::performance-title[]
:installing-tuned-link: pass:macros[xref:installing-and-enabling-tuned_getting-started-with-tuned[Installing and enabling Tuned]]
endif::[]
endif::[]
ifdef::pantheonenv[]
:installing-tuned-link: pass:macros[xref:modules/performance/proc_installing-and-enabling-tuned.adoc[Installing and enabling Tuned]]
endif::[]
ifdef::upstream[]
:installing-tuned-link: pass:macros[xref:installing-and-enabling-tuned_getting-started-with-tuned[]]
endif::[]
// Use a link elsewhere.
ifndef::performance-title[]
:installing-tuned-link: pass:macros[https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/8/html/monitoring_and_managing_system_status_and_performance/getting-started-with-tuned_monitoring-and-managing-system-status-and-performance#installing-and-enabling-tuned_getting-started-with-tuned]
endif::[]

* The `tuned` service is installed and enabled. For details, see {installing-tuned-link}.

.Procedure
// Use an xref if we're inside the performance title or the upstream Tuned manual. Wrap performance title inside pantheonenv[] ifndef. Use pantheonenv[] ifdef to use correct xref syntax for PV2.
ifndef::pantheonenv[]
ifdef::performance-title[]
:tuned-profiles-link: pass:macros[xref:tuned-profiles-distributed-with-rhel_getting-started-with-tuned[Tuned profiles distributed with RHEL]]
endif::[]
endif::[]
ifdef::pantheonenv[]
:tuned-profiles-link: pass:macros[xref:modules/performance/ref_tuned-profiles-distributed-with-rhel.adoc[Tuned profiles distributed with RHEL]]
endif::[]
ifdef::upstream[]
:tuned-profiles-link: pass:macros[xref:tuned-profiles-distributed-with-rhel_getting-started-with-tuned[]]
endif::[]
// Use a link elsewhere.
ifndef::performance-title[]
:tuned-profiles-link: pass:macros[https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/8/html/monitoring_and_managing_system_status_and_performance/getting-started-with-tuned_monitoring-and-managing-system-status-and-performance#tuned-profiles-distributed-with-rhel_getting-started-with-tuned]
endif::[]

. Optional: Select an existing *TuneD* profile on which your profile will be based. For a list of available profiles, see {tuned-profiles-link}.
+
To see which profile is currently active, use:
+
----
$ tuned-adm active
----

. Create a new directory to hold your *TuneD* profile:
+
[subs=+quotes]
----
# mkdir /etc/tuned/[replaceable]__my-profile__
----

. Find the system unique identifier of the selected block device:
+
[subs=+quotes]
----
$ udevadm info --query=property --name=/dev/_device_ | grep -E '(WWN|SERIAL)'

ID_WWN=_0x5002538d00000000_
ID_SERIAL=_Generic-_SD_MMC_20120501030900000-0:0_
ID_SERIAL_SHORT=_20120501030900000_
----
+
[NOTE]
====
The command in this example will return all values identified as a World Wide Name (WWN) or serial number associated with the specified block device. Although it is preferred to use a WWN, the WWN is not always available for a given device, and any values returned by the example command are acceptable to use as the _device system unique ID_.
====

. Create the `/etc/tuned/_my-profile_/tuned.conf` configuration file. In the file, set the following options:

.. Optional: Include an existing profile:
+
[subs=+quotes]
----
[main]
include=_existing-profile_
----

.. Set the selected disk scheduler for the device that matches the WWN identifier:
+
[subs=+quotes]
----
[disk]
devices_udev_regex=_IDNAME_=_device system unique id_
elevator=_selected-scheduler_
----
+
Here:

* Replace _IDNAME_ with the name of the identifier being used (for example, `ID_WWN`).
* Replace _device system unique id_ with the value of the chosen identifier (for example, `0x5002538d00000000`).
+
To match multiple devices in the `devices_udev_regex` option, enclose the identifiers in parentheses and separate them with vertical bars:
+
[subs=+quotes]
----
devices_udev_regex=(ID_WWN=_0x5002538d00000000_)|(ID_WWN=_0x1234567800000000_)
----

. Enable your profile:
+
[subs=+quotes]
----
# tuned-adm profile [replaceable]__my-profile__
----

.Verification steps

. Verify that the TuneD profile is active and applied:
+
[subs=+quotes]
----
$ tuned-adm active
Current active profile: [replaceable]_my-profile_
----
+
----
$ tuned-adm verify
Verification succeeded, current system settings match the preset profile.
See TuneD log file ('/var/log/tuned/tuned.log') for details.
----

[role="_additional-resources"]
.Additional resources
// Use an xref if we're inside the performance title or the upstream Tuned manual. Wrap performance title inside pantheonenv[] ifndef. Use pantheonenv[] ifdef to use correct xref syntax for PV2.
ifndef::pantheonenv[]
ifdef::performance-title[]
:customizing-tuned-link: pass:macros[xref:customizing-tuned-profiles_monitoring-and-managing-system-status-and-performance[Customizing Tuned profiles]]
endif::[]
endif::[]
ifdef::pantheonenv[]
:customizing-tuned-link: pass:macros[xref:assemblies/assembly_customizing-tuned-profiles.adoc[Customizing Tuned Profiles]]
endif::[]
ifdef::upstream[]
:customizing-tuned-link: pass:macros[xref:customizing-tuned-profiles_monitoring-and-managing-system-status-and-performance[Customizing Tuned profiles]]
endif::[]
// Use a link elsewhere.
ifndef::performance-title[]
:customizing-tuned-link: pass:macros[https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/8/html/monitoring_and_managing_system_status_and_performance/customizing-tuned-profiles_monitoring-and-managing-system-status-and-performance]
endif::[]
//* For more information on creating a *TuneD* profile, see
* {customizing-tuned-link}.
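To illustrate how a `devices_udev_regex` value selects devices, the following standalone Python sketch applies the combined pattern from the step above to sample udev property strings. This is an illustration only, not TuneD's actual matching code, and the device names and WWN values are hypothetical examples:

```python
import re

# The combined pattern from the step above: match either of two WWNs.
devices_udev_regex = r"(ID_WWN=0x5002538d00000000)|(ID_WWN=0x1234567800000000)"

# Sample udev property blobs for three hypothetical block devices.
devices = {
    "sda": "ID_WWN=0x5002538d00000000\nID_SERIAL_SHORT=20120501030900000",
    "sdb": "ID_WWN=0x1234567800000000",
    "sdc": "ID_WWN=0xdeadbeef00000000",
}

# Only devices whose properties match the regular expression are selected.
matched = [name for name, props in devices.items()
           if re.search(devices_udev_regex, props)]
print(matched)  # ['sda', 'sdb']
```

Because the value is an ordinary regular expression, a partially anchored pattern can match more devices than intended; keeping each identifier in its own parenthesized alternative, as shown, keeps the intent explicit.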
// File: doc/manual/modules/performance/ref_available-tuned-plug-ins.adoc

:_module-type: REFERENCE

[id="available-tuned-plug-ins_{context}"]
= Available TuneD plug-ins

[role="_abstract"]
This section lists all monitoring and tuning plug-ins currently available in *TuneD*.

[discrete]
== Monitoring plug-ins

Currently, the following monitoring plug-ins are implemented:

`disk`:: Gets disk load (number of IO operations) per device and measurement interval.
`net`:: Gets network load (number of transferred packets) per network card and measurement interval.
`load`:: Gets CPU load per CPU and measurement interval.

[discrete]
== Tuning plug-ins

Currently, the following tuning plug-ins are implemented. Only some of these plug-ins implement dynamic tuning. Options supported by plug-ins are also listed:

`cpu`:: Sets the CPU governor to the value specified by the [option]`governor` option and dynamically changes the Power Management Quality of Service (PM QoS) CPU Direct Memory Access (DMA) latency according to the CPU load.
+
If the CPU load is lower than the value specified by the [option]`load_threshold` option, the latency is set to the value specified by the [option]`latency_high` option; otherwise, it is set to the value specified by [option]`latency_low`.
+
You can also force the latency to a specific value and prevent it from dynamically changing further. To do so, set the [option]`force_latency` option to the required latency value.

`eeepc_she`:: Dynamically sets the front-side bus (FSB) speed according to the CPU load.
+
This feature can be found on some netbooks and is also known as the ASUS Super Hybrid Engine (SHE).
+
If the CPU load is lower than or equal to the value specified by the [option]`load_threshold_powersave` option, the plug-in sets the FSB speed to the value specified by the [option]`she_powersave` option.
If the CPU load is higher than or equal to the value specified by the [option]`load_threshold_normal` option, it sets the FSB speed to the value specified by the [option]`she_normal` option.
+
Static tuning is not supported, and the plug-in is transparently disabled if *TuneD* does not detect hardware support for this feature.

`net`:: Configures the Wake-on-LAN functionality to the values specified by the [option]`wake_on_lan` option. It uses the same syntax as the `ethtool` utility. It also dynamically changes the interface speed according to the interface utilization.

`sysctl`:: Sets various `sysctl` settings specified by the plug-in options.
+
The syntax is ``[replaceable]__name__=[replaceable]__value__``, where [replaceable]_name_ is the same as the name provided by the `sysctl` utility.
+
Use the `sysctl` plug-in if you need to change system settings that are not covered by other plug-ins available in *TuneD*. If the settings are covered by some specific plug-ins, prefer these plug-ins.

`usb`:: Sets the autosuspend timeout of USB devices to the value specified by the [option]`autosuspend` parameter.
+
The value `0` means that autosuspend is disabled.

`vm`:: Enables or disables transparent huge pages depending on the Boolean value of the [option]`transparent_hugepages` option.
+
Valid values of the [option]`transparent_hugepages` option are:
+
--
* "always"
* "never"
* "madvise"
--

`audio`:: Sets the autosuspend timeout for audio codecs to the value specified by the [option]`timeout` option.
+
Currently, the `snd_hda_intel` and `snd_ac97_codec` codecs are supported. The value `0` means that the autosuspend is disabled. You can also enforce the controller reset by setting the Boolean option [option]`reset_controller` to `true`.

`disk`:: Sets the disk elevator to the value specified by the [option]`elevator` option.
+
It also sets:
+
--
* APM to the value specified by the [option]`apm` option
* Scheduler quantum to the value specified by the [option]`scheduler_quantum` option
* Disk spindown timeout to the value specified by the [option]`spindown` option
* Disk readahead to the value specified by the [option]`readahead` parameter
* The current disk readahead to a value multiplied by the constant specified by the [option]`readahead_multiply` option
--
+
In addition, this plug-in dynamically changes the advanced power management and spindown timeout setting for the drive according to the current drive utilization. The dynamic tuning can be controlled by the Boolean option [option]`dynamic` and is enabled by default.

`scsi_host`:: Tunes options for SCSI hosts.
+
It sets Aggressive Link Power Management (ALPM) to the value specified by the [option]`alpm` option.

`mounts`:: Enables or disables barriers for mounts according to the Boolean value of the [option]`disable_barriers` option.

`scheduler`:: Allows tuning of scheduling priorities, process/thread/IRQ affinities, and CPU core isolation.
+
The `scheduler` plug-in uses a perf event loop to catch newly created processes. By default, it listens to `perf.RECORD_COMM` and `perf.RECORD_EXIT` events. If you set the `perf_process_fork` parameter to `true`, the plug-in also listens to `perf.RECORD_FORK` events; in other words, child processes created by the `fork()` system call are processed. Because child processes inherit CPU affinity from their parents, the `scheduler` plug-in usually does not need to process these events explicitly. Because processing perf events can pose a significant CPU overhead, the `perf_process_fork` parameter is set to `false` by default, and child processes are not processed by the `scheduler` plug-in.
+
Perf events are delivered through an mmapped buffer. Under heavy load, the buffer may overflow. In such cases, the `scheduler` plug-in may start missing events and fail to process some newly created processes. Increasing the buffer size may help.
The buffer size can be set with the `perf_mmap_pages` parameter. The value of this parameter must be a power of 2. If it is not, the nearest greater power of 2 is calculated and used instead. If the `perf_mmap_pages` parameter is omitted, the default kernel value is used, which should be 128 for recent kernels (tested on kernel-5.9.8).
+
The `default_irq_smp_affinity` parameter controls the values *TuneD* writes to `/proc/irq/default_smp_affinity`. The following values are supported:
+
--
`calc`:: The content of `/proc/irq/default_smp_affinity` is calculated from the `isolated_cores` parameter. Non-isolated cores are calculated as the inversion of `isolated_cores`. Then the intersection of the non-isolated cores and the previous content of `/proc/irq/default_smp_affinity` is written to `/proc/irq/default_smp_affinity`. If the intersection is an empty set, just the non-isolated cores are written to `/proc/irq/default_smp_affinity`. This behavior is the default if the `default_irq_smp_affinity` parameter is omitted.
`ignore`:: *TuneD* does not touch `/proc/irq/default_smp_affinity`.
cpulist, such as `1,3-4`:: The cpulist is unpacked and written directly to `/proc/irq/default_smp_affinity`.
--

`script`:: Executes an external script or binary when the profile is loaded or unloaded. You can choose an arbitrary executable.
+
IMPORTANT: The `script` plug-in is provided mainly for compatibility with earlier releases. Prefer other *TuneD* plug-ins if they cover the required functionality.
+
*TuneD* calls the executable with one of the following arguments:
+
--
** `start` when loading the profile
** `stop` when unloading the profile
--
+
You need to correctly implement the `stop` action in your executable and revert all settings that you changed during the `start` action. Otherwise, the roll-back step after changing your *TuneD* profile will not work.
+
Bash scripts can import the [filename]`/usr/lib/tuned/functions` Bash library and use the functions defined there. Use these functions only for functionality that is not natively provided by *TuneD*. If a function name starts with an underscore, such as `_wifi_set_power_level`, consider the function private and do not use it in your scripts, because it might change in the future.
+
Specify the path to the executable using the `script` parameter in the plug-in configuration.
+
.Running a Bash script from a profile
====
To run a Bash script named `script.sh` that is located in the profile directory, use:

----
[script]
script=${i:PROFILE_DIR}/script.sh
----
====

`sysfs`:: Sets various `sysfs` settings specified by the plug-in options.
+
The syntax is ``[replaceable]__name__=[replaceable]__value__``, where [replaceable]_name_ is the `sysfs` path to use.
+
Use this plug-in if you need to change some settings that are not covered by other plug-ins. Prefer specific plug-ins if they cover the required settings.

`video`:: Sets various powersave levels on video cards. Currently, only Radeon cards are supported.
+
The powersave level can be specified by using the [option]`radeon_powersave` option. Supported values are:
+
--
* `default`
* `auto`
* `low`
* `mid`
* `high`
* `dynpm`
* `dpm-battery`
* `dpm-balanced`
* `dpm-performance`
--
+
For details, see link:http://www.x.org/wiki/RadeonFeature#KMS_Power_Management_Options[www.x.org]. Note that this plug-in is experimental and the option might change in future releases.

`bootloader`:: Adds options to the kernel command line. This plug-in supports only the GRUB 2 boot loader.
+
A customized, non-standard location of the GRUB 2 configuration file can be specified by the [option]`grub2_cfg_file` option.
+
The kernel options are added to the current GRUB configuration and its templates. The system needs to be rebooted for the kernel options to take effect.
+
Switching to another profile or manually stopping the `tuned` service removes the additional options. If you shut down or reboot the system, the kernel options persist in the [filename]`grub.cfg` file.
+
The kernel options can be specified by the following syntax:
+
[subs=+quotes]
----
cmdline=[replaceable]_arg1_ [replaceable]_arg2_ ... [replaceable]_argN_
----
+
--
.Modifying the kernel command line
====
For example, to add the [option]`quiet` kernel option to a *TuneD* profile, include the following lines in the [filename]`tuned.conf` file:

----
[bootloader]
cmdline=quiet
----

The following is an example of a custom profile that adds the [option]`isolcpus=2` option to the kernel command line:

----
[bootloader]
cmdline=isolcpus=2
----
====
--

// File: doc/manual/modules/performance/ref_built-in-functions-available-in-tuned-profiles.adoc

:_module-type: REFERENCE

[id="built-in-functions-available-in-tuned-profiles_{context}"]
= Built-in functions available in TuneD profiles

[role="_abstract"]
The following built-in functions are available in all *TuneD* profiles:

`PROFILE_DIR`:: Returns the directory path where the profile and the `tuned.conf` file are located.
`exec`:: Executes a process and returns its output.
`assertion`:: Compares two arguments. If they _do not match_, the function logs text from the first argument and aborts profile loading.
`assertion_non_equal`:: Compares two arguments. If they _match_, the function logs text from the first argument and aborts profile loading.
`kb2s`:: Converts kilobytes to disk sectors.
`s2kb`:: Converts disk sectors to kilobytes.
`strip`:: Creates a string from all passed arguments and deletes both leading and trailing white space.
`virt_check`:: Checks whether *TuneD* is running inside a virtual machine (VM) or on bare metal:
+
* Inside a VM, the function returns the first argument.
* On bare metal, the function returns the second argument, even in case of an error.

`cpulist_invert`:: Inverts a list of CPUs to make its complement. For example, on a system with 4 CPUs, numbered from 0 to 3, the inversion of the list `0,2,3` is `1`.
`cpulist2hex`:: Converts a CPU list to a hexadecimal CPU mask.
`cpulist2hex_invert`:: Converts a CPU list to a hexadecimal CPU mask and inverts it.
`hex2cpulist`:: Converts a hexadecimal CPU mask to a CPU list.
`cpulist_online`:: Checks whether the CPUs from the list are online. Returns the list containing only online CPUs.
`cpulist_present`:: Checks whether the CPUs from the list are present. Returns the list containing only present CPUs.
`cpulist_unpack`:: Unpacks a CPU list in the form of `1-3,4` to `1,2,3,4`.
`cpulist_pack`:: Packs a CPU list in the form of `1,2,3,5` to `1-3,5`.

// File: doc/manual/modules/performance/ref_real-time-tuned-profiles-distributed-with-rhel.adoc

:_module-type: REFERENCE

[id="real-time-tuned-profiles-distributed-with-rhel_{context}"]
= Real-time TuneD profiles distributed with RHEL

[role="_abstract"]
Real-time profiles are intended for systems running the real-time kernel. Without a special kernel build, they do not configure the system to be real-time.

On RHEL, the profiles are available from additional repositories.

The following real-time profiles are available:

`realtime`:: Use on bare-metal real-time systems.
+
Provided by the [package]`tuned-profiles-realtime` package, which is available from the RT or NFV repositories.

`realtime-virtual-host`:: Use in a virtualization host configured for real-time.
+
Provided by the [package]`tuned-profiles-nfv-host` package, which is available from the NFV repository.

`realtime-virtual-guest`:: Use in a virtualization guest configured for real-time.
+ Provided by the [package]`tuned-profiles-nfv-guest` package, which is available from the NFV repository. 07070100000041000081A40000000000000000000000016391BC3A00001F49000000000000000000000000000000000000006900000000tuned-2.19.0.29+git.b894a3e/doc/manual/modules/performance/ref_tuned-profiles-distributed-with-rhel.adoc:_module-type: CONCEPT [id="tuned-profiles-distributed-with-rhel_{context}"] = TuneD profiles distributed with RHEL [role="_abstract"] The following is a list of profiles that are installed with *TuneD* on {RHEL}. NOTE: There might be more product-specific or third-party *TuneD* profiles available. Such profiles are usually provided by separate RPM packages. `balanced`:: The default power-saving profile. It is intended to be a compromise between performance and power consumption. It uses auto-scaling and auto-tuning whenever possible. The only drawback is the increased latency. In the current *TuneD* release, it enables the CPU, disk, audio, and video plugins, and activates the `conservative` CPU governor. The `radeon_powersave` option uses the `dpm-balanced` value if it is supported; otherwise, it is set to `auto`. + It changes the `energy_performance_preference` attribute to the `normal` energy setting. It also changes the `scaling_governor` policy attribute to either the `conservative` or `powersave` CPU governor. `powersave`:: A profile for maximum power saving. It can throttle performance to minimize actual power consumption. In the current *TuneD* release, it enables USB autosuspend, WiFi power saving, and Aggressive Link Power Management (ALPM) power savings for SATA host adapters. It also schedules multi-core power savings for systems with a low wakeup rate and activates the `ondemand` governor. It enables AC97 audio power saving or, depending on your system, HDA-Intel power savings with a 10-second timeout.
If your system contains a supported Radeon graphics card with KMS enabled, the profile configures it for automatic power saving. On ASUS Eee PCs, a dynamic Super Hybrid Engine is enabled. + It changes the `energy_performance_preference` attribute to the `powersave` or `power` energy setting. It also changes the `scaling_governor` policy attribute to either the `ondemand` or `powersave` CPU governor. + [NOTE] -- In certain cases, the `balanced` profile is more efficient than the `powersave` profile. Consider a fixed amount of work that needs to be done, for example, a video file that needs to be transcoded. Your machine might consume less energy if the transcoding is done at full power, because the task finishes quickly, the machine starts to idle, and it can automatically step down to very efficient power-save modes. On the other hand, if you transcode the file on a throttled machine, the machine consumes less power during the transcoding, but the process takes longer and the overall energy consumed can be higher. That is why the `balanced` profile can generally be the better option. -- `throughput-performance`:: A server profile optimized for high throughput. It disables power-saving mechanisms and enables `sysctl` settings that improve the throughput performance of disk and network I/O. The CPU governor is set to `performance`. + It sets the `energy_performance_preference` and `scaling_governor` attributes to `performance`. `accelerator-performance`:: The `accelerator-performance` profile contains the same tuning as the `throughput-performance` profile. Additionally, it locks the CPU to low C states so that the latency is less than 100 us. This improves the performance of certain accelerators, such as GPUs. `latency-performance`:: A server profile optimized for low latency. It disables power-saving mechanisms and enables `sysctl` settings that improve latency.
The CPU governor is set to `performance` and the CPU is locked to low C states (by PM QoS). + It sets the `energy_performance_preference` and `scaling_governor` attributes to `performance`. `network-latency`:: A profile for low-latency network tuning. It is based on the `latency-performance` profile. It additionally disables transparent huge pages and NUMA balancing, and tunes several other network-related `sysctl` parameters. + It inherits the `latency-performance` profile, which sets the `energy_performance_preference` and `scaling_governor` attributes to `performance`. `hpc-compute`:: A profile optimized for high-performance computing. It is based on the `latency-performance` profile. `network-throughput`:: A profile for throughput network tuning. It is based on the `throughput-performance` profile. It additionally increases kernel network buffers. + It inherits the `throughput-performance` profile, which sets the `energy_performance_preference` and `scaling_governor` attributes to `performance`. `virtual-guest`:: A profile designed for virtual guests, based on the `throughput-performance` profile. Among other changes, it decreases virtual memory swappiness and increases disk readahead values. It does not disable disk barriers. + It inherits the `throughput-performance` profile, which sets the `energy_performance_preference` and `scaling_governor` attributes to `performance`. `virtual-host`:: A profile designed for virtual hosts, based on the `throughput-performance` profile. Among other changes, it decreases virtual memory swappiness, increases disk readahead values, and enables more aggressive writeback of dirty pages. + It inherits the `throughput-performance` profile, which sets the `energy_performance_preference` and `scaling_governor` attributes to `performance`.
`oracle`:: A profile optimized for Oracle database loads, based on the `throughput-performance` profile. It additionally disables transparent huge pages and modifies other performance-related kernel parameters. This profile is provided by the [package]`tuned-profiles-oracle` package. `desktop`:: A profile optimized for desktops, based on the `balanced` profile. It additionally enables scheduler autogroups for better responsiveness of interactive applications. `cpu-partitioning`:: The `cpu-partitioning` profile partitions the system CPUs into isolated and housekeeping CPUs. To reduce jitter and interruptions on an isolated CPU, the profile clears the isolated CPU from user-space processes, movable kernel threads, interrupt handlers, and kernel timers. + A housekeeping CPU can run all services, shell processes, and kernel threads. + You can configure the `cpu-partitioning` profile in the `/etc/tuned/cpu-partitioning-variables.conf` file. The configuration options are: + -- `isolated_cores=_cpu-list_`:: Lists CPUs to isolate. The list of isolated CPUs is comma-separated, and you can specify a range using a dash, such as `3-5`. This option is mandatory. Any CPU missing from this list is automatically considered a housekeeping CPU. `no_balance_cores=_cpu-list_`:: Lists CPUs that the kernel does not consider during system-wide process load balancing. This option is optional. This is usually the same list as `isolated_cores`. -- + For more information on `cpu-partitioning`, see the `tuned-profiles-cpu-partitioning(7)` man page. `optimize-serial-console`:: A profile that tunes down I/O activity to the serial console by reducing the `printk` value. This should make the serial console more responsive. This profile is intended to be used as an overlay on other profiles. For example: + [subs=+quotes] ---- # tuned-adm profile throughput-performance optimize-serial-console ---- `mssql`:: A profile provided for Microsoft SQL Server.
It is based on the `throughput-performance` profile. `postgresql`:: A profile optimized for PostgreSQL database loads, based on the `throughput-performance` profile. It additionally disables transparent huge pages and modifies other performance-related kernel parameters. This profile is provided by the [package]`tuned-profiles-postgresql` package. `intel-sst`:: A profile optimized for systems with user-defined Intel Speed Select Technology configurations. This profile is intended to be used as an overlay on other profiles. For example: + [subs=+quotes] ---- # tuned-adm profile cpu-partitioning intel-sst ---- 07070100000042000041ED0000000000000000000000036391BC3A00000000000000000000000000000000000000000000002800000000tuned-2.19.0.29+git.b894a3e/experiments07070100000043000041ED0000000000000000000000026391BC3A00000000000000000000000000000000000000000000003200000000tuned-2.19.0.29+git.b894a3e/experiments/kwin-stop07070100000044000081A40000000000000000000000016391BC3A00000510000000000000000000000000000000000000004200000000tuned-2.19.0.29+git.b894a3e/experiments/kwin-stop/xlib-example.py#!/usr/bin/python3 -Es from __future__ import print_function import os import Xlib from Xlib import X, display, Xatom dpy = display.Display() def loop(): atoms = {} wm_active_window = dpy.get_atom('_NET_ACTIVE_WINDOW') screens = dpy.screen_count() for num in range(screens): screen = dpy.screen(num) screen.root.change_attributes(event_mask=X.PropertyChangeMask) while True: ev = dpy.next_event() if ev.type == X.PropertyNotify: if ev.atom == wm_active_window: data = ev.window.get_full_property(ev.atom, 0) id = int(data.value.tolist()[0]) hidden = [] showed = [] if id != 0: for num in range(screens): root = dpy.screen(num).root for win in root.get_full_property(dpy.get_atom('_NET_CLIENT_LIST'), 0).value.tolist(): window = dpy.create_resource_object('window', win) if window.get_full_property(dpy.get_atom('_NET_WM_STATE'), Xatom.WINDOW) is None: continue if dpy.get_atom("_NET_WM_STATE_HIDDEN") in
window.get_full_property(dpy.get_atom('_NET_WM_STATE'), 0).value.tolist(): if not win in hidden: hidden.append(win) else: if not win in showed: showed.append(win) print("Showed:", showed) print("Minimized:", hidden) if __name__ == '__main__': loop() 07070100000045000081A40000000000000000000000016391BC3A0000099D000000000000000000000000000000000000003A00000000tuned-2.19.0.29+git.b894a3e/experiments/malloc_trim_ldp.c/* * malloc_trim_ldp: A ld-preload library that can be used to potentially * save memory (especially for long running larger apps). * * Copyright (C) 2008-2013 Red Hat, Inc. * Authors: Phil Knirsch * * This program is free software; you can redistribute it and/or * modify it under the terms of the GNU General Public License * as published by the Free Software Foundation; either version 2 * of the License, or (at your option) any later version. * * This program is distributed in the hope that it will be useful, * but WITHOUT ANY WARRANTY; without even the implied warranty of * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the * GNU General Public License for more details. * * You should have received a copy of the GNU General Public License * along with this program; if not, write to the Free Software * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA. * * To compile: * gcc -Wall -fPIC -shared -lpthread -o malloc_trim_ldp.o malloc_trim_ldp.c * * To install: * For a single app: * LD_PRELOAD=./malloc_trim_ldp.o application * * Systemwide: * cp malloc_trim_ldp.o /lib * echo "/lib/malloc_trim_ldp.o" >> /etc/ld.so.preload * * How it works: * This ld-preload library simply redirects the glibc free() call to a new * one that simply has a static counter and every 10.000 free() calls will * call malloc_trim(0) which goes through the heap of an application and * basically releases pages that aren't in use anymore using madvise(). 
* */ #include <malloc.h> #include <stdlib.h> #include <stdio.h> #include <limits.h> #include <sys/types.h> #include <unistd.h> #include <time.h> #include <errno.h> #include <pthread.h> static pthread_mutex_t mymutex = PTHREAD_MUTEX_INITIALIZER; static int malloc_trim_count=0; static void mymalloc_install (void); static void mymalloc_uninstall (void); static void (*old_free_hook) (void *, const void *); static void myfree(void *ptr, const void *caller) { pthread_mutex_lock(&mymutex); malloc_trim_count++; if(malloc_trim_count%10000 == 0) { malloc_trim(0); } mymalloc_uninstall(); free(ptr); mymalloc_install(); pthread_mutex_unlock(&mymutex); } static void mymalloc_install (void) { old_free_hook = __free_hook; __free_hook = myfree; } static void mymalloc_uninstall (void) { __free_hook = old_free_hook; } void (*__malloc_initialize_hook) (void) = mymalloc_install; 07070100000046000081ED0000000000000000000000016391BC3A00002B11000000000000000000000000000000000000003A00000000tuned-2.19.0.29+git.b894a3e/experiments/powertop2tuned.py#!/usr/bin/python3 -Es # -*- coding: utf-8 -*- # # Copyright (C) 2008-2013 Red Hat, Inc. # Authors: Jan Kaluža <jkaluza@redhat.com> # # This program is free software; you can redistribute it and/or # modify it under the terms of the GNU General Public License # as published by the Free Software Foundation; either version 2 # of the License, or (at your option) any later version. # # This program is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU General Public License for more details. # # You should have received a copy of the GNU General Public License # along with this program; if not, write to the Free Software # Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA. 
# from __future__ import print_function # exception handler for python 2/3 compatibility try: from builtins import chr except ImportError: pass import os import sys import tempfile import shutil import argparse import codecs from subprocess import * # exception handler for python 2/3 compatibility try: from html.parser import HTMLParser from html.entities import name2codepoint except ImportError: from HTMLParser import HTMLParser from htmlentitydefs import name2codepoint SCRIPT_SH = """#!/bin/sh . /usr/lib/tuned/functions start() { %s return 0 } stop() { %s return 0 } process $@ """ TUNED_CONF_PROLOG = "# Automatically generated by powertop2tuned tool\n\n" TUNED_CONF_INCLUDE = """[main] %s\n """ TUNED_CONF_EPILOG="""\n[powertop_script] type=script replace=1 script=${i:PROFILE_DIR}/script.sh """ class PowertopHTMLParser(HTMLParser): def __init__(self, enable_tunings): HTMLParser.__init__(self) self.inProperTable = False self.inScript = False self.intd = False self.lastStartTag = "" self.tdCounter = 0 self.lastDesc = "" self.data = "" self.currentScript = "" if enable_tunings: self.prefix = "" else: self.prefix = "#" self.plugins = {} def getParsedData(self): return self.data def getPlugins(self): return self.plugins def handle_starttag(self, tag, attrs): self.lastStartTag = tag if self.lastStartTag == "div" and dict(attrs).get("id") in ["tuning", "wakeup"]: self.inProperTable = True if self.inProperTable and tag == "td": self.tdCounter += 1 self.intd = True def parse_command(self, command): prefix = "" command = command.strip() if command[0] == '#': prefix = "#" command = command[1:] if command.startswith("echo") and command.find("/proc/sys") != -1: splitted = command.split("'") value = splitted[1] path = splitted[3] path = path.replace("/proc/sys/", "").replace("/", ".") self.plugins.setdefault("sysctl", "[sysctl]\n") self.plugins["sysctl"] += "#%s\n%s%s=%s\n\n" % (self.lastDesc, prefix, path, value) # TODO: plugins/plugin_sysfs.py doesn't support this so far, it 
has to be implemented to # let it work properly. elif command.startswith("echo") and (command.find("'/sys/") != -1 or command.find("\"/sys/") != -1): splitted = command.split("'") value = splitted[1] path = splitted[3] if path in ("/sys/module/snd_hda_intel/parameters/power_save", "/sys/module/snd_ac97_codec/parameters/power_save"): self.plugins.setdefault("audio", "[audio]\n") self.plugins["audio"] += "#%s\n%stimeout=1\n" % (self.lastDesc, prefix) else: self.plugins.setdefault("sysfs", "[sysfs]\n") self.plugins["sysfs"] += "#%s\n%s%s=%s\n\n" % (self.lastDesc, prefix, path, value) elif command.startswith("ethtool -s ") and command.endswith("wol d;"): self.plugins.setdefault("net", "[net]\n") self.plugins["net"] += "#%s\n%swake_on_lan=0\n" % (self.lastDesc, prefix) else: return False return True def handle_endtag(self, tag): if self.inProperTable and tag == "table": self.inProperTable = False self.intd = False if tag == "tr": self.tdCounter = 0 self.intd = False if tag == "td": self.intd = False if self.inScript: #print self.currentScript self.inScript = False # Command is not handled, so just store it in the script if not self.parse_command(self.currentScript.split("\n")[-1]): self.data += self.currentScript + "\n\n" def handle_entityref(self, name): if self.inScript: self.currentScript += chr(name2codepoint[name]) def handle_data(self, data): prefix = self.prefix if self.inProperTable and self.intd and self.tdCounter == 1: self.lastDesc = data if self.lastDesc.lower().find("autosuspend") != -1 and (self.lastDesc.lower().find("keyboard") != -1 or self.lastDesc.lower().find("mouse") != -1): self.lastDesc += "\n# WARNING: For some devices, uncommenting this command can disable the device." 
prefix = "#" if self.intd and ((self.inProperTable and self.tdCounter == 2) or self.inScript): self.tdCounter = 0 if not self.inScript: self.currentScript += "\t# " + self.lastDesc + "\n" self.currentScript += "\t" + prefix + data.strip() self.inScript = True else: self.currentScript += data.strip() class PowertopProfile: BAD_PRIVS = 100 PARSING_ERROR = 101 BAD_SCRIPTSH = 102 def __init__(self, output, profile_name, name = ""): self.profile_name = profile_name self.name = name self.output = output def currentActiveProfile(self): proc = Popen(["tuned-adm", "active"], stdout=PIPE, \ universal_newlines = True) output = proc.communicate()[0] if output and output.find("Current active profile: ") == 0: return output[len("Current active profile: "):output.find("\n")] return None def checkPrivs(self): myuid = os.geteuid() if myuid != 0: print('Run this program as root', file=sys.stderr) return False return True def generateHTML(self): print("Running PowerTOP, please wait...") environment = os.environ.copy() environment["LC_ALL"] = "C" try: proc = Popen(["/usr/sbin/powertop", \ "--html=/tmp/powertop", "--time=1"], \ stdout=PIPE, stderr=PIPE, \ env=environment, \ universal_newlines = True) output = proc.communicate()[1] except (OSError, IOError): print('Unable to execute PowerTOP, is PowerTOP installed?', file=sys.stderr) return -2 if proc.returncode != 0: print('PowerTOP returned error code: %d' % proc.returncode, file=sys.stderr) return -2 prefix = "PowerTOP outputting using base filename " if output.find(prefix) == -1: # workaround for PowerTOP older than 2.13 prefix = "PowerTOP outputing using base filename " if output.find(prefix) == -1: return -1 name = output[output.find(prefix)+len(prefix):-1] #print "Parsed filename=", [name] return name def parseHTML(self, enable_tunings): f = None data = None parser = PowertopHTMLParser(enable_tunings) try: f = codecs.open(self.name, "r", "utf-8") data = f.read() except (OSError, IOError, UnicodeDecodeError): data = None if f is 
not None: f.close() if data is None: return "", "" parser.feed(data) return parser.getParsedData(), parser.getPlugins() def generateShellScript(self, data): print("Generating shell script", os.path.join(self.output, "script.sh")) f = None try: f = codecs.open(os.path.join(self.output, "script.sh"), "w", "utf-8") f.write(SCRIPT_SH % (data, "")) os.fchmod(f.fileno(), 0o755) f.close() except (OSError, IOError) as e: print("Error writing shell script: %s" % e, file=sys.stderr) if f is not None: f.close() return False return True def generateTunedConf(self, profile, plugins): print("Generating TuneD config file", os.path.join(self.output, "tuned.conf")) f = codecs.open(os.path.join(self.output, "tuned.conf"), "w", "utf-8") f.write(TUNED_CONF_PROLOG) if profile is not None: if self.profile_name == profile: print('New profile has same name as active profile, not including active profile (avoiding circular deps).', file=sys.stderr) else: f.write(TUNED_CONF_INCLUDE % ("include=" + profile)) for plugin in list(plugins.values()): f.write(plugin + "\n") f.write(TUNED_CONF_EPILOG) f.close() def generate(self, new_profile, merge_profile, enable_tunings): generated_html = False if len(self.name) == 0: generated_html = True if not self.checkPrivs(): return self.BAD_PRIVS name = self.generateHTML() if isinstance(name, int): return name self.name = name data, plugins = self.parseHTML(enable_tunings) if generated_html: os.unlink(self.name) if len(data) == 0 and len(plugins) == 0: print('Your Powertop version is incompatible (maybe too old) or the generated HTML output is malformed', file=sys.stderr) return self.PARSING_ERROR if new_profile is False: if merge_profile is None: profile = self.currentActiveProfile() else: profile = merge_profile else: profile = None if not os.path.exists(self.output): os.makedirs(self.output) if not self.generateShellScript(data): return self.BAD_SCRIPTSH self.generateTunedConf(profile, plugins) return 0 if __name__ == "__main__": parser = 
argparse.ArgumentParser(description='Creates a TuneD profile from Powertop HTML output.') parser.add_argument('profile', metavar='profile_name', type=str, nargs='?', help='Name for the profile to be written.') parser.add_argument('-i', '--input', metavar='input_html', type=str, help='Path to the Powertop HTML report. If not given, it is generated automatically.') parser.add_argument('-o', '--output', metavar='output_directory', type=str, help='Directory where the profile will be written, default is the /etc/tuned/profile_name directory.') parser.add_argument('-n', '--new-profile', action='store_true', help='Creates a new profile, otherwise it merges (includes) your current profile.') parser.add_argument('-m', '--merge-profile', action = 'store', help = 'Merges (includes) the specified profile (can be suppressed by the -n option).') parser.add_argument('-f', '--force', action='store_true', help='Overwrites the output directory if it already exists.') parser.add_argument('-e', '--enable', action='store_true', help='Enable all tunings (not recommended).
Even with this enabled, tunings known to be harmful (like USB_AUTOSUSPEND) won\'t be enabled.') args = parser.parse_args() args = vars(args) if not args['profile'] and not args['output']: print('You have to specify the profile_name or output directory using the --output argument.', file=sys.stderr) parser.print_help() sys.exit(-1) if not args['output']: args['output'] = "/etc/tuned" if args['profile']: args['output'] = os.path.join(args['output'], args['profile']) if not args['input']: args['input'] = '' if os.path.exists(args['output']) and not args['force']: print('Output directory already exists, use --force to overwrite it.', file=sys.stderr) sys.exit(-1) p = PowertopProfile(args['output'], args['profile'], args['input']) sys.exit(p.generate(args['new_profile'], args['merge_profile'], args['enable'])) 07070100000047000081A40000000000000000000000016391BC3A000013EC000000000000000000000000000000000000003500000000tuned-2.19.0.29+git.b894a3e/experiments/varcpuload.c/* * varcpuload: Simple tool to create reproducible artificial sustained load on * a machine. * * Copyright (C) 2008-2013 Red Hat, Inc. * Authors: Phil Knirsch * * This program is free software; you can redistribute it and/or * modify it under the terms of the GNU General Public License * as published by the Free Software Foundation; either version 2 * of the License, or (at your option) any later version. * * This program is distributed in the hope that it will be useful, * but WITHOUT ANY WARRANTY; without even the implied warranty of * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the * GNU General Public License for more details. * * You should have received a copy of the GNU General Public License * along with this program; if not, write to the Free Software * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA. * * Usage: varcpuload [-t time] [-n numcpu] [LOAD | MINLOAD MAXLOAD INCREASE] * LOAD, MINLOAD and MAXLOAD need to be between 1 and 100.
* * To compile: * gcc -Wall -Os varcpuload.c -o varcpuload -lpthread * * To measure load: * 1st terminal: * for i in `seq 1 2 100`; do ./varcpuload -t 55 -n `/usr/bin/getconf _NPROCESSORS_ONLN` $i; done * or better * ./varcpuload -t 60 -n `/usr/bin/getconf _NPROCESSORS_ONLN` 1 100 2 * 2nd terminal: * rm -f results; for i in `seq 1 2 100`; do powertop -d -t 60 >> results; done * * make sure the machine is otherwise idle, so boot the machine to init level 3 or even 1 * and stop every unnecessary service. * */ #include <getopt.h> #include <sys/time.h> #include <pthread.h> #include <string.h> #include <stdio.h> #include <stdlib.h> #include <unistd.h> #include <errno.h> #include <ctype.h> #define handle_error_en(en, msg) \ do { errno = en; perror(msg); exit(EXIT_FAILURE); } while (0) #define handle_error(msg) \ do { perror(msg); exit(EXIT_FAILURE); } while (0) #define ARRSIZE 512 int sleeptime = 0; int duration = 60; int load = 100; void usage() { fprintf(stderr, "Usage: varcpuload [-t time] [-n numcpu] [LOAD | MINLOAD MAXLOAD INCREASE]\n"); fprintf(stderr, "LOAD, MINLOAD and MAXLOAD need to be between 1 and 100.\n"); } int worker(void) { int i, j; float array[ARRSIZE][ARRSIZE]; for (i = 0; i < ARRSIZE; i++) { for (j = 0; j < ARRSIZE; j++) { array[i][j] = (float)(i + j) / (float)(i + 1); } } return (int)array[1][1]; } int timeDiff(struct timeval *tv1, struct timeval *tv2) { return (tv2->tv_sec - tv1->tv_sec) * 1000000 + tv2->tv_usec - tv1->tv_usec; } int getWorkerTime() { int cnt, i; struct timeval tv1, tv2; cnt = 0; gettimeofday(&tv1, NULL); gettimeofday(&tv2, NULL); // Warmup of 1 sec while (1000000 > timeDiff(&tv1, &tv2)) { i = worker(); usleep(1); gettimeofday(&tv2, NULL); } gettimeofday(&tv1, NULL); gettimeofday(&tv2, NULL); // Measure for 4 sec while (4*1000000 > timeDiff(&tv1, &tv2)) { i = worker(); usleep(0); gettimeofday(&tv2, NULL); cnt++; } return timeDiff(&tv1, &tv2)/cnt; } static void * runWorker(void *arg) { int i; struct timeval tv1, tv2;
gettimeofday(&tv1, NULL); gettimeofday(&tv2, NULL); while (duration > timeDiff(&tv1, &tv2)) { i = worker(); usleep(sleeptime); gettimeofday(&tv2, NULL); } return NULL; } int main(int argc, char *argv[]) { int wtime, numcpu, opt, s, i; int minload, maxload, loadinc; pthread_attr_t attr; pthread_t *tid; void *res; numcpu = 1; while ((opt = getopt(argc, argv, "t:n:")) != -1) { switch (opt) { case 't': duration = atoi(optarg); break; case 'n': numcpu = atoi(optarg); break; default: /* '?' */ usage(); exit(EXIT_FAILURE); } } loadinc = 1; switch (argc - optind) { case 0: minload = 100; maxload = 100; break; case 1: minload = atoi(argv[optind]); maxload = minload; break; case 3: minload = atoi(argv[optind]); maxload = atoi(argv[optind + 1]); loadinc = atoi(argv[optind + 2]); break; default: /* '?' */ usage(); exit(EXIT_FAILURE); } if (minload < 1 || maxload < 1 || minload > 100 || maxload > 100) { usage(); exit(EXIT_FAILURE); } wtime = getWorkerTime(); duration *= 1000000; for (load = minload; load <= maxload; load += loadinc) { sleeptime = wtime * 100 / load - wtime; printf("Starting %d sec run with\n", duration / 1000000); printf("Load: %d\n", load); printf("Worker time: %d\n", wtime); printf("Sleep time: %d\n", sleeptime); printf("Nr. 
of CPUs to run on: %d\n", numcpu); s = pthread_attr_init(&attr); if (s != 0) handle_error_en(s, "pthread_attr_init"); tid = malloc(sizeof(pthread_t) * numcpu); if (tid == NULL) handle_error("malloc"); for (i = 0; i<numcpu; i++) { s = pthread_create(&tid[i], &attr, &runWorker, NULL); if (s != 0) handle_error_en(s, "pthread_create"); } s = pthread_attr_destroy(&attr); if (s != 0) handle_error_en(s, "pthread_attr_destroy"); for (i = 0; i < numcpu; i++) { s = pthread_join(tid[i], &res); if (s != 0) handle_error_en(s, "pthread_join"); } free(tid); } exit(EXIT_SUCCESS); } 07070100000048000041ED0000000000000000000000026391BC3A00000000000000000000000000000000000000000000002200000000tuned-2.19.0.29+git.b894a3e/icons07070100000049000081A40000000000000000000000016391BC3A00002510000000000000000000000000000000000000002C00000000tuned-2.19.0.29+git.b894a3e/icons/tuned.svg<?xml version="1.0" encoding="UTF-8" standalone="no"?> <!-- Created with Inkscape (http://www.inkscape.org/) --> <svg xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:cc="http://creativecommons.org/ns#" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:svg="http://www.w3.org/2000/svg" xmlns="http://www.w3.org/2000/svg" xmlns:sodipodi="http://sodipodi.sourceforge.net/DTD/sodipodi-0.dtd" xmlns:inkscape="http://www.inkscape.org/namespaces/inkscape" width="256" height="256" viewBox="0 0 256 256" id="svg4249" version="1.1" inkscape:version="0.91 r13725" sodipodi:docname="tuned_icon.svg"> <defs id="defs4251" /> <sodipodi:namedview id="base" pagecolor="#ffffff" bordercolor="#666666" borderopacity="1.0" inkscape:pageopacity="0.0" inkscape:pageshadow="2" inkscape:zoom="0.86878745" inkscape:cx="212.24024" inkscape:cy="71.338202" inkscape:document-units="px" inkscape:current-layer="layer1" showgrid="false" inkscape:window-width="1920" inkscape:window-height="1016" inkscape:window-x="0" inkscape:window-y="27" inkscape:window-maximized="1" showguides="false" units="px" inkscape:showpageshadow="false" /> <metadata 
id="metadata4254"> <rdf:RDF> <cc:Work rdf:about=""> <dc:format>image/svg+xml</dc:format> <dc:type rdf:resource="http://purl.org/dc/dcmitype/StillImage" /> <dc:title></dc:title> </cc:Work> </rdf:RDF> </metadata> <g inkscape:groupmode="layer" id="layer2" inkscape:label="Layer 2" style="display:inline" transform="translate(0,-796.36216)" /> <g inkscape:label="Layer 1" inkscape:groupmode="layer" id="layer1" transform="translate(0,-796.36216)"> <g style="fill:#151515;fill-opacity:1" id="g5026" transform="matrix(0.23484443,0,0,0.34837575,370.11307,51.388521)" /> <g transform="matrix(0.20338124,0.11742221,-0.17418787,0.30170225,497.97321,19.453144)" id="g5054" style="fill:#151515;fill-opacity:1" /> <g transform="matrix(0.11742222,0.20338123,-0.30170225,0.17418788,624.67183,55.729997)" id="g5060" style="fill:#151515;fill-opacity:1" /> <g style="fill:#151515;fill-opacity:1" id="g5066" transform="matrix(1.1190515e-8,0.23484442,-0.34837576,0,716.25874,150.49244)" /> <g style="fill:#151515;fill-opacity:1" id="g5072" transform="matrix(-0.1174222,0.20338124,-0.30170225,-0.17418788,748.19374,278.35399)" /> <path style="opacity:1;fill:#151515;fill-opacity:1;stroke:none;stroke-width:15;stroke-linecap:round;stroke-linejoin:round;stroke-miterlimit:4;stroke-dasharray:none;stroke-dashoffset:0;stroke-opacity:1" d="m 123.03787,855.03984 -2.11521,14.01669 c -3.85947,0.11422 -7.64154,0.53218 -11.31903,1.25327 l -5.15911,-13.15294 a 83.453544,83.453544 0 0 0 -11.923112,3.57045 83.453544,83.453544 0 0 0 -11.176569,5.45665 l 5.17714,13.19801 c -3.237672,2.00679 -6.291014,4.27948 -9.146103,6.77482 l -11.145907,-8.89184 a 83.453544,83.453544 0 0 0 -15.470116,19.38858 l 11.122465,8.87201 c -1.795143,3.32515 -3.313038,6.82032 -4.553215,10.44264 l -13.998661,-2.11161 a 83.453544,83.453544 0 0 0 -3.703884,24.52606 l 14.016692,2.11342 c 0.114346,3.85867 0.532262,7.64047 1.253259,11.31722 l -13.15113,5.16091 a 83.453544,83.453544 0 0 0 3.566835,11.92131 83.453544,83.453544 0 0 0 5.46026,11.18018 l 
13.199814,-5.17714 c 2.006922,3.23749 4.275765,6.29118 6.771223,9.1461 l -8.888244,11.14231 a 83.453544,83.453544 0 0 0 19.384985,15.47196 l 8.873814,-11.12432 c 3.324588,1.79482 6.819233,3.31322 10.440834,4.55322 l -2.113409,14.0005 a 83.453544,83.453544 0 0 0 24.526049,3.7002 l 2.11342,-14.0131 c 3.8604,-0.1141 7.64253,-0.5337 11.32082,-1.255 l 5.16091,13.1511 a 83.453544,83.453544 0 0 0 11.91772,-3.5686 83.453544,83.453544 0 0 0 11.18197,-5.4603 l -5.17895,-13.19619 c 1.78522,-1.10668 3.50957,-2.30125 5.18075,-3.56143 l -5.03107,-4.02486 c -0.007,0.006 -0.0143,0.011 -0.0216,0.0162 -1.70811,-1.36015 -3.41559,-2.72074 -5.12665,-4.07897 -9.00893,5.9893 -19.8279,9.48332 -31.48301,9.48332 -31.461328,0 -56.867411,-25.40428 -56.867411,-56.86561 0,-31.46133 25.406083,-56.86742 56.867411,-56.86742 31.46134,0 56.86562,25.40609 56.86562,56.86742 0,6.4369 -1.07914,12.61285 -3.04029,18.37696 1.72795,1.38161 3.4541,2.76373 5.18075,4.14568 -0.008,0.022 -0.0151,0.0448 -0.0235,0.0667 l 5.08517,4.06633 c 0.57372,-1.38271 1.11702,-2.78034 1.6049,-4.20518 l 13.99685,2.11161 a 83.453544,83.453544 0 0 0 3.70388,-24.52606 l -14.01308,-2.11341 c -0.1141,-3.85883 -0.53431,-7.64034 -1.25507,-11.31722 l 13.15114,-5.16272 a 83.453544,83.453544 0 0 0 -3.56864,-11.92311 83.453544,83.453544 0 0 0 -5.45665,-11.17657 l -13.19802,5.17534 c -2.0063,-3.23663 -4.27661,-6.29007 -6.77121,-9.14431 l 8.88643,-11.1423 a 83.453544,83.453544 0 0 0 -19.38501,-15.47196 l -8.8702,11.12066 c -3.32596,-1.79568 -6.82294,-3.31097 -10.44625,-4.55141 l 2.11341,-14.00047 a 83.453544,83.453544 0 0 0 -24.52425,-3.70388 z" id="path5082" inkscape:connector-curvature="0" /> <path 
style="color:#000000;font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:medium;line-height:normal;font-family:sans-serif;text-indent:0;text-align:start;text-decoration:none;text-decoration-line:none;text-decoration-style:solid;text-decoration-color:#000000;letter-spacing:normal;word-spacing:normal;text-transform:none;direction:ltr;block-progression:tb;writing-mode:lr-tb;baseline-shift:baseline;text-anchor:start;white-space:normal;clip-rule:nonzero;display:inline;overflow:visible;visibility:visible;opacity:1;isolation:auto;mix-blend-mode:normal;color-interpolation:sRGB;color-interpolation-filters:linearRGB;solid-color:#000000;solid-opacity:1;fill:#2c8596;fill-opacity:1;fill-rule:nonzero;stroke:none;stroke-width:8.6044426;stroke-linecap:butt;stroke-linejoin:miter;stroke-miterlimit:4;stroke-dasharray:none;stroke-dashoffset:0;stroke-opacity:1;color-rendering:auto;image-rendering:auto;shape-rendering:auto;text-rendering:auto;enable-background:accumulate" d="m 122.99998,825.4164 c -62.562065,0 -113.4719675,50.9099 -113.4719582,113.47195 0,62.56205 50.9097732,113.47385 113.4719582,113.47385 26.10145,0 50.17846,-8.8816 69.3646,-23.7451 -0.93166,-3.5168 -1.04987,-7.2555 -0.1096,-11.0496 -2.67184,-2.2508 -5.36246,-4.4886 -8.06612,-6.7175 -16.48312,14.0254 -37.79668,22.5005 -61.18888,22.5005 -52.288811,0 -94.460268,-42.17323 -94.460268,-94.46215 0,-52.28891 42.171328,-94.46027 94.460268,-94.46027 52.2889,0 94.46213,42.17148 94.46212,94.46027 0,15.18935 -3.58941,29.49123 -9.93127,42.1902 2.10474,1.68388 5.84054,4.68945 7.94399,6.37367 4.67593,-1.5838 5.63331,-1.49124 10.74819,-1.45635 6.57262,-14.35713 10.25077,-30.31701 10.25077,-47.10752 0,-62.56216 -50.91175,-113.47195 -113.4738,-113.47195 z" id="path4286" inkscape:connector-curvature="0" sodipodi:nodetypes="ssscccsssscccss" /> <rect 
style="opacity:1;fill:#2c8596;fill-opacity:1;stroke:none;stroke-width:10;stroke-linecap:butt;stroke-linejoin:miter;stroke-miterlimit:4;stroke-dasharray:none;stroke-dashoffset:0;stroke-opacity:1" id="rect4346" width="22.221884" height="15.872772" x="111.88996" y="814.95599" /> <rect style="opacity:1;fill:#2c8596;fill-opacity:1;stroke:none;stroke-width:14;stroke-linecap:butt;stroke-linejoin:miter;stroke-miterlimit:4;stroke-dasharray:none;stroke-dashoffset:0;stroke-opacity:1" id="rect4354" width="70.747231" height="19.589611" x="87.627289" y="796.36218" /> <rect style="opacity:1;fill:#2c8596;fill-opacity:1;stroke:none;stroke-width:14;stroke-linecap:butt;stroke-linejoin:miter;stroke-miterlimit:4;stroke-dasharray:none;stroke-dashoffset:0;stroke-opacity:1" id="rect4356" width="25.658739" height="36.435406" x="674.33521" y="505.26779" transform="matrix(0.77532397,0.63156373,-0.63156373,0.77532397,0,0)" /> <path sodipodi:nodetypes="cscscsccccccscsccccccccc" style="opacity:1;fill:#60605b;fill-opacity:1;stroke:none;stroke-width:3.41591263;stroke-linecap:round;stroke-linejoin:round;stroke-miterlimit:4;stroke-dasharray:none;stroke-dashoffset:0;stroke-opacity:1" d="m 82.03727,919.18353 20.18111,18.38964 c 3.00512,2.73836 5.57396,0.38408 7.44031,-0.73815 1.86645,-1.12153 4.5992,-3.62938 6.34232,-5.80905 1.74312,-2.17963 3.59011,-5.09546 4.11626,-7.2684 0.52621,-2.17294 0.65325,-4.78572 -2.22022,-6.88306 L 96.428153,901.2046 c 11.602037,-4.97268 29.532987,-0.4384 39.133497,7.22633 10.06491,8.04898 9.9871,18.28968 4.6508,28.54206 l 72.73565,58.16943 c 6.46176,-4.53718 13.58784,-5.44005 20.77287,0.30624 7.62282,6.10684 14.27408,19.22174 12.44495,29.08964 l -15.90636,-13.9133 c -2.12895,-1.8622 -4.13286,-1.3068 -5.72628,-0.5189 -1.59343,0.7878 -3.53229,2.7301 -4.91822,4.4631 -1.38589,1.733 -2.85363,4.2912 -3.39738,5.9349 -0.54385,1.6438 -1.9211,4.0474 0.72789,5.8996 l 17.79101,12.4384 c -9.15424,4.0132 -23.48714,0.3482 -31.12041,-5.7502 -7.45926,-5.9652 -7.91464,-13.4444 
-4.50594,-21.0341 -72.54662,-58.01806 c -8.82866,7.45968 -18.80177,9.78657 -28.866438,1.7373 -9.593287,-7.67959 -18.053081,-24.25227 -15.660003,-36.59392 z" id="path5084" inkscape:connector-curvature="0" /> </g> </svg> 0707010000004A000041ED0000000000000000000000026391BC3A00000000000000000000000000000000000000000000002400000000tuned-2.19.0.29+git.b894a3e/libexec0707010000004B000081ED0000000000000000000000016391BC3A00000E77000000000000000000000000000000000000003600000000tuned-2.19.0.29+git.b894a3e/libexec/defirqaffinity.py
#!/usr/bin/python3
# Helper script for realtime profiles provided by RT

import os
import sys

irqpath = "/proc/irq/"

def bitmasklist(line):
    fields = line.strip().split(",")
    bitmasklist = []
    entry = 0
    for i in range(len(fields) - 1, -1, -1):
        mask = int(fields[i], 16)
        while mask != 0:
            if mask & 1:
                bitmasklist.append(entry)
            mask >>= 1
            entry += 1
    return bitmasklist

def get_cpumask(mask):
    groups = []
    comma = 0
    while mask:
        cpumaskstr = ''
        m = mask & 0xffffffff
        cpumaskstr += ('%x' % m)
        if comma:
            cpumaskstr += ','
        comma = 1
        mask >>= 32
        groups.append(cpumaskstr)
    string = ''
    for i in reversed(groups):
        string += i
    return string

def parse_def_affinity(fname):
    if os.getuid() != 0:
        return
    try:
        with open(fname, 'r') as f:
            line = f.readline()
        return bitmasklist(line)
    except IOError:
        return [ 0 ]

def verify(shouldbemask):
    inplacemask = 0
    fname = irqpath + "default_smp_affinity"
    cpulist = parse_def_affinity(fname)
    for i in cpulist:
        inplacemask = inplacemask | 1 << i
    if (inplacemask & ~shouldbemask):
        sys.stderr.write("verify: failed: irqaffinity (%s) inplacemask=%x shouldbemask=%x\n" % (fname, inplacemask, shouldbemask))
        sys.exit(1)
    # now verify each /proc/irq/$num/smp_affinity
    interruptdirs = [ f for f in os.listdir(irqpath) if os.path.isdir(os.path.join(irqpath,f)) ]
    # IRQ 2 - cascaded signals from IRQs 8-15 (any devices configured to use IRQ 2 will actually be using IRQ 9)
    try:
        interruptdirs.remove("2")
    except ValueError:
        pass
    # IRQ 0 - system timer (cannot be changed)
    try:
        interruptdirs.remove("0")
    except ValueError:
        pass
    for i in interruptdirs:
        inplacemask = 0
        fname = irqpath + i + "/smp_affinity"
        cpulist = parse_def_affinity(fname)
        for j in cpulist:
            inplacemask = inplacemask | 1 << j
        if (inplacemask & ~shouldbemask):
            sys.stderr.write("verify: failed: irqaffinity (%s) inplacemask=%x shouldbemask=%x\n" % (fname, inplacemask, shouldbemask))
            sys.exit(1)
    sys.exit(0)

sys.stderr.write("defirqaffinity.py is deprecated. Use isolated_cores or other built-in functionality instead.\n")

# adjust default_smp_affinity
cpulist = parse_def_affinity(irqpath + "default_smp_affinity")
mask = 0
for i in cpulist:
    mask = mask | 1 << i

if len(sys.argv) < 3 or len(str(sys.argv[2])) == 0:
    sys.stderr.write("%s: invalid arguments\n" % os.path.basename(sys.argv[0]))
    sys.exit(1)

line = sys.argv[2]
fields = line.strip().split(",")
for i in fields:
    if sys.argv[1] == "add":
        mask = mask | 1 << int(i)
    elif sys.argv[1] == "remove" or sys.argv[1] == "verify":
        mask = mask & ~(1 << int(i))

if sys.argv[1] == "verify":
    verify(mask)

string = get_cpumask(mask)
fo = open(irqpath + "default_smp_affinity", "w")
fo.write(string)
fo.close()

# now adjust each /proc/irq/$num/smp_affinity
interruptdirs = [ f for f in os.listdir(irqpath) if os.path.isdir(os.path.join(irqpath,f)) ]
# IRQ 2 - cascaded signals from IRQs 8-15 (any devices configured to use IRQ 2 will actually be using IRQ 9)
try:
    interruptdirs.remove("2")
except ValueError:
    pass
# IRQ 0 - system timer (cannot be changed)
try:
    interruptdirs.remove("0")
except ValueError:
    pass

ret = 0
for i in interruptdirs:
    fname = irqpath + i + "/smp_affinity"
    cpulist = parse_def_affinity(fname)
    mask = 0
    for j in cpulist:
        mask = mask | 1 << j
    for j in fields:
        if sys.argv[1] == "add":
            mask = mask | 1 << int(j)
        elif sys.argv[1] == "remove":
            mask = mask & ~(1 << int(j))
    string = get_cpumask(mask)
    try:
        fo = open(fname, "w")
        fo.write(string)
        fo.close()
    except IOError as e:
        sys.stderr.write('Failed to set smp_affinity for IRQ %s: %s\n' % (str(i), str(e)))
        ret = 1
sys.exit(ret)
0707010000004C000081ED0000000000000000000000016391BC3A00000F04000000000000000000000000000000000000003400000000tuned-2.19.0.29+git.b894a3e/libexec/pmqos-static.py
#!/usr/bin/python3
#
# pmqos-static.py: Simple daemon for setting static PM QoS values. It is a part
# of 'tuned' and it should not be called manually.
#
# Copyright (C) 2011 Red Hat, Inc.
# Authors: Jan Vcelak <jvcelak@redhat.com>
#
# This program is free software; you can redistribute it and/or
# modify it under the terms of the GNU General Public License
# as published by the Free Software Foundation; either version 2
# of the License, or (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program; if not, write to the Free Software
# Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
#

from __future__ import print_function
import os
import signal
import struct
import sys
import time

# Used interface is described in Linux kernel documentation:
# Documentation/power/pm_qos_interface.txt

ALLOWED_INTERFACES = [ "cpu_dma_latency", "network_latency", "network_throughput" ]
PIDFILE = "/var/run/tuned/pmqos-static.pid"

def do_fork():
    pid = os.fork()
    if pid > 0:
        sys.exit(0)

def close_fds():
    f = open('/dev/null', 'w+')
    os.dup2(f.fileno(), sys.stdin.fileno())
    os.dup2(f.fileno(), sys.stdout.fileno())
    os.dup2(f.fileno(), sys.stderr.fileno())

def write_pidfile():
    with os.fdopen(os.open(PIDFILE, os.O_CREAT | os.O_TRUNC | os.O_WRONLY, 0o644), "w") as f:
        f.write("%d" % os.getpid())

def daemonize():
    do_fork()
    os.chdir("/")
    os.setsid()
    os.umask(0)
    do_fork()
    close_fds()

def set_pmqos(name, value):
    filename = "/dev/%s" % name
    bin_value = struct.pack("i", int(value))
    try:
        fd = os.open(filename, os.O_WRONLY)
    except OSError:
        print("Cannot open (%s)." % filename, file=sys.stderr)
        return None
    os.write(fd, bin_value)
    return fd

def sleep_forever():
    while True:
        time.sleep(86400)

def sigterm_handler(signum, frame):
    global pmqos_fds
    if type(pmqos_fds) is list:
        for fd in pmqos_fds:
            os.close(fd)
    sys.exit(0)

def run_daemon(options):
    try:
        daemonize()
        write_pidfile()
        signal.signal(signal.SIGTERM, sigterm_handler)
    except Exception as e:
        print("Cannot daemonize (%s)." % e, file=sys.stderr)
        return False
    global pmqos_fds
    pmqos_fds = []
    for (name, value) in list(options.items()):
        try:
            new_fd = set_pmqos(name, value)
            if new_fd is not None:
                pmqos_fds.append(new_fd)
        except:
            # we are daemonized
            pass
    if len(pmqos_fds) > 0:
        sleep_forever()
    else:
        return False

def kill_daemon(force = False):
    try:
        with open(PIDFILE, "r") as pidfile:
            daemon_pid = int(pidfile.read())
    except IOError as e:
        if not force:
            print("Cannot open PID file (%s)." % e, file=sys.stderr)
        return False
    try:
        os.kill(daemon_pid, signal.SIGTERM)
    except OSError as e:
        if not force:
            print("Cannot terminate the daemon (%s)." % e, file=sys.stderr)
        return False
    try:
        os.unlink(PIDFILE)
    except OSError as e:
        if not force:
            print("Cannot delete the PID file (%s)." % e, file=sys.stderr)
        return False
    return True

if __name__ == "__main__":
    disable = False
    options = {}
    for option in sys.argv[1:]:
        if option == "disable":
            disable = True
            break
        try:
            (name, value) = option.split("=")
        except ValueError:
            name = option
            value = None
        if name in ALLOWED_INTERFACES and len(value) > 0:
            options[name] = value
        else:
            print("Invalid option (%s)." % option, file=sys.stderr)
    if disable:
        sys.exit(0 if kill_daemon() else 1)
    if len(options) == 0:
        print("No options set. Not starting.", file=sys.stderr)
        sys.exit(1)
    kill_daemon(True)
    run_daemon(options)
    sys.exit(1)
0707010000004D000041ED0000000000000000000000026391BC3A00000000000000000000000000000000000000000000002000000000tuned-2.19.0.29+git.b894a3e/man0707010000004E000081A40000000000000000000000016391BC3A0000033F000000000000000000000000000000000000002E00000000tuned-2.19.0.29+git.b894a3e/man/diskdevstat.8.TH "diskdevstat" "8" "13 Jan 2011" "Phil Knirsch" "Tool for recording harddisk activity" .SH NAME diskdevstat - tool for recording harddisk activity .SH SYNOPSIS \fBdiskdevstat\fP [\fIupdate interval\fP] [\fItotal duration\fP] [\fIdisplay histogram\fP] .SH DESCRIPTION \fBdiskdevstat\fR is a simple systemtap script to record harddisk activity of processes and display statistics for read/write operations. .TP update interval Sets the sampling interval in seconds. .TP total duration Sets the total measurement time in seconds. .TP display histogram If this parameter is present, the histogram will be shown at the end of the measurement. .SH "SEE ALSO" .LP tuned(8) netdevstat(8) varnetload(8) scomes(8) stap(1) .SH AUTHOR Written by Phil Knirsch <pknirsch@redhat.com>. .SH REPORTING BUGS Report bugs to https://bugzilla.redhat.com/. 
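The defirqaffinity.py helper above converts between the comma-separated hexadecimal affinity masks used by /proc/irq/*/smp_affinity and plain lists of CPU numbers. A self-contained sketch of that round trip (helper names are illustrative, not the script's API; unlike the script's bitmasklist(), each comma-separated group here is explicitly aligned to a 32-bit boundary):

```python
def mask_to_cpulist(line):
    # "/proc/irq/*/smp_affinity" holds 32-bit hex groups, highest group first,
    # separated by commas; bit N set means CPU N may handle the interrupt.
    cpus = []
    for group, field in enumerate(reversed(line.strip().split(","))):
        mask = int(field, 16)
        base = group * 32  # each comma-separated group covers 32 CPUs
        bit = 0
        while mask:
            if mask & 1:
                cpus.append(base + bit)
            mask >>= 1
            bit += 1
    return cpus

def cpulist_to_mask(cpus):
    # Inverse direction: build one big integer, then emit 32-bit hex groups
    # (unpadded, which the kernel accepts), highest group first.
    mask = 0
    for cpu in cpus:
        mask |= 1 << cpu
    groups = []
    while True:
        groups.append("%x" % (mask & 0xffffffff))
        mask >>= 32
        if not mask:
            break
    return ",".join(reversed(groups))

print(mask_to_cpulist("00000001,00000f0f"))  # [0, 1, 2, 3, 8, 9, 10, 11, 32]
print(cpulist_to_mask([0, 1, 2, 3, 8, 9, 10, 11, 32]))  # 1,f0f
```

Writing such a mask string back to /proc/irq/<n>/smp_affinity, as the script does, requires root.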
0707010000004F000081A40000000000000000000000016391BC3A0000033F000000000000000000000000000000000000002D00000000tuned-2.19.0.29+git.b894a3e/man/netdevstat.8.TH "netdevstat" "8" "13 Jan 2011" "Phil Knirsch" "Tool for recording network activity" .SH NAME netdevstat - tool for recording network activity .SH SYNOPSIS \fBnetdevstat\fP [\fIupdate interval\fP] [\fItotal duration\fP] [\fIdisplay histogram\fP] .SH DESCRIPTION \fBnetdevstat\fR is a simple systemtap script to record network activity of processes and display statistics for transmit/receive operations. .TP update interval Sets the sampling interval in seconds. .TP total duration Sets the total measurement time in seconds. .TP display histogram If this parameter is present, the histogram will be shown at the end of the measurement. .SH "SEE ALSO" .LP tuned(8) diskdevstat(8) varnetload(8) scomes(8) stap(1) .SH AUTHOR Written by Phil Knirsch <pknirsch@redhat.com>. .SH REPORTING BUGS Report bugs to https://bugzilla.redhat.com/. 07070100000050000081A40000000000000000000000016391BC3A0000036F000000000000000000000000000000000000002900000000tuned-2.19.0.29+git.b894a3e/man/scomes.8.TH "scomes" "8" "13 Jan 2011" "Phil Knirsch" "Tool for watching system resources" .SH NAME scomes - tool for watching system resources .SH SYNOPSIS \fBscomes\fP \-c "binary [binary arguments ...]" [timer] .SH DESCRIPTION \fBscomes\fR is a simple systemtap script for watching the activity of one process. The number of syscalls, userspace and kernelspace ticks, read and written bytes, transmitted and received bytes, and polling syscalls are measured. .TP binary Binary file to be executed. This process will be watched. .TP timer Setting this option causes the script to print out statistics every N seconds. If not provided, the statistics are printed only when the watched process terminates. .SH "SEE ALSO" .LP tuned(8) diskdevstat(8) netdevstat(8) varnetload(8) stap(1) .SH AUTHOR Written by Jan Hutař <jhutar@redhat.com>. 
.SH REPORTING BUGS Report bugs to https://bugzilla.redhat.com/. 07070100000051000081A40000000000000000000000016391BC3A000012E0000000000000000000000000000000000000002C00000000tuned-2.19.0.29+git.b894a3e/man/tuned-adm.8.\"/* .\" * All rights reserved .\" * Copyright (C) 2009-2017 Red Hat, Inc. .\" * Authors: Jaroslav Škarvada, Jan Kaluža, Jan Včelák .\" * Marcela Mašláňová, Phil Knirsch .\" * .\" * This program is free software; you can redistribute it and/or .\" * modify it under the terms of the GNU General Public License .\" * as published by the Free Software Foundation; either version 2 .\" * of the License, or (at your option) any later version. .\" * .\" * This program is distributed in the hope that it will be useful, .\" * but WITHOUT ANY WARRANTY; without even the implied warranty of .\" * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the .\" * GNU General Public License for more details. .\" * .\" * You should have received a copy of the GNU General Public License .\" * along with this program; if not, write to the Free Software .\" * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA. .\" */ .\" .TH TUNED_ADM "8" "30 Mar 2017" "Fedora Power Management SIG" "TuneD" .SH NAME tuned\-adm - command line tool for switching between different tuning profiles .SH SYNOPSIS .B tuned\-adm .RB [ list " | " active " | " "profile \fI[profile]\fP..." " | " "profile_info \fI[profile]\fP..." " | " off " | "auto_profile" | "profile_mode" | " verify " | " recommend ] .SH DESCRIPTION This command line utility allows you to switch between user\-definable tuning profiles. Several predefined profiles are already included. You can also create your own profile, either by copying an existing one or by creating a completely new one. The distribution\-provided profiles are stored in subdirectories below \fI/usr/lib/tuned\fP and user\-defined profiles in subdirectories below \fI/etc/tuned\fP. 
If there are profiles with the same name in both places, user defined profiles have precedence. .SH "OPTIONS" .SS .TP .B list List all available profiles. .P .RS .B profiles List all available profiles. .P .B plugins List all available plugins. .RS .P .B -v, --verbose List plugin's configuration options and their hints. .RE .RE .TP .B active Show current active profile. .TP .BI "profile " [PROFILE_NAME] ... Switches to the given profile. If more than one profile is given, the profiles are merged (in case of conflicting settings, the setting from the last profile is used) and the resulting profile is applied. If no profile is given, then all available profiles are listed. If the profile given is not valid, the command gracefully exits without performing any operation. .TP .BI "profile_info " [PROFILE_NAME] ... Show information/description of given profile or current profile if no profile is specified. .TP .B verify Verifies current profile against system settings. Outputs information whether system settings match current profile or not (e.g. somebody modified a sysfs/sysctl value by hand). Detailed information about what is checked, what value is set and what value is expected can be found in the log. .TP .B recommend Recommend a profile suitable for your system. Currently only static detection is implemented - it decides according to data in \fI/etc/system\-release\-cpe\fP and virt\-what output. The rules for autodetection are defined in the file \fI/usr/lib/tuned/recommend.d/50-tuned.conf\fP. The default rules recommend profiles targeted to the best performance, or the balanced profile if unsure. The default rules can be overridden by the user by putting a file named \fIrecommend.conf\fP into /etc/tuned, or by creating a file in the \fI/etc/tuned/recommend.d\fP directory. The file \fI/etc/tuned/recommend.conf\fP is evaluated first. 
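The directory precedence described here reduces to a first-match lookup over the two profile trees. A minimal sketch, assuming the documented locations /etc/tuned and /usr/lib/tuned (the function name is illustrative, not tuned's internal API):

```python
import os

# User profiles are searched first, so they shadow distribution profiles.
LOAD_DIRECTORIES = ["/etc/tuned", "/usr/lib/tuned"]

def find_profile_dir(name, load_dirs=LOAD_DIRECTORIES):
    # Return the first directory containing the named profile, or None.
    for base in load_dirs:
        candidate = os.path.join(base, name)
        if os.path.isdir(candidate):
            return candidate
    return None
```

With a profile named balanced present in both trees, the /etc/tuned copy is returned, matching the precedence rule stated above.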
If no match is found, the files in the \fI/etc/tuned/recommend.d\fP directory are merged with the files in the \fI/usr/lib/tuned/recommend.d\fP directory (if there is a file with the same name in both directories, the one from \fI/etc/tuned/recommend.d\fP is used) and the files are evaluated in alphabetical order. The first matching entry is used. .TP .B auto_profile Enable automatic profile selection mode, switch to the recommended profile. .TP .B profile_mode Show current profile selection mode. .TP .B off Unload tunings. .SH "FILES" .nf /etc/tuned/* /usr/lib/tuned/* .SH "SEE ALSO" .BR tuned (8) .BR tuned.conf (5) .BR tuned\-profiles (7) .BR tuned\-profiles\-atomic (7) .BR tuned\-profiles\-sap (7) .BR tuned\-profiles\-sap\-hana (7) .BR tuned\-profiles\-oracle (7) .BR tuned\-profiles\-realtime (7) .BR tuned\-profiles\-nfv\-host (7) .BR tuned\-profiles\-nfv\-guest (7) .BR tuned\-profiles\-compat (7) .BR tuned\-profiles\-postgresql (7) .BR tuned\-profiles\-openshift (7) .SH AUTHOR .nf Jaroslav Škarvada <jskarvad@redhat.com> Jan Kaluža <jkaluza@redhat.com> Jan Včelák <jvcelak@redhat.com> Marcela Mašláňová <mmaslano@redhat.com> Phil Knirsch <pknirsch@redhat.com> 07070100000052000081A40000000000000000000000016391BC3A00000792000000000000000000000000000000000000002C00000000tuned-2.19.0.29+git.b894a3e/man/tuned-gui.8.\"/*. .\" * All rights reserved .\" * Copyright (C) 2019 Red Hat, Inc. .\" * Authors: Tomáš Korbař .\" * .\" * This program is free software; you can redistribute it and/or .\" * modify it under the terms of the GNU General Public License .\" * as published by the Free Software Foundation; either version 2 .\" * of the License, or (at your option) any later version. .\" * .\" * This program is distributed in the hope that it will be useful, .\" * but WITHOUT ANY WARRANTY; without even the implied warranty of .\" * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the .\" * GNU General Public License for more details. 
.\" * .\" * You should have received a copy of the GNU General Public License .\" * along with this program; if not, write to the Free Software .\" * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA. .\" */ .\". .TH "tuned-gui" "8" "22 Mar 2019" "Fedora Power Management SIG" "TuneD" .SH NAME tuned\-gui - graphical interface for configuration of TuneD .SH SYNOPSIS \fBtuned\-gui\fP .SH DESCRIPTION \fBtuned-gui\fP is graphical interface for configuration of TuneD daemon. GUI allows user to easily create, remove or update TuneD profiles and load those profiles into TuneD. It is also possible to explore all available plugins and information about them, like their description or help for their configuration parameters. .SH "TABS" .TP .B SUMMARY Display current active profile and configuration of plugins used by it. .TP .B PROFILES Display available profiles and their origin (System | User). Profiles can be managed by 'create new profile', 'Edit' and 'Remove' buttons. .TP .B PLUGINS Display available plugins and all accessible information about them. .SH "SEE ALSO" .LP tuned(8) tuned\-adm(8) .SH AUTHOR .nf Jaroslav Škarvada <jskarvad@redhat.com> Marek Stana Tomáš Korbař <tkorbar@redhat.com> .SH REPORTING BUGS Report bugs to https://bugzilla.redhat.com/. 07070100000053000081A40000000000000000000000016391BC3A0000108D000000000000000000000000000000000000003200000000tuned-2.19.0.29+git.b894a3e/man/tuned-main.conf.5.TH "tuned-main.conf" "5" "15 Oct 2013" "Jaroslav Škarvada" "tuned-main.conf file format description" .SH NAME tuned\-main.conf - TuneD global configuration file .SH SYNOPSIS .B /etc/tuned/tuned\-main.conf .SH DESCRIPTION This man page documents format of the TuneD global configuration file. The \fItuned\-main.conf\fR file uses the ini\-file format. .TP .BI daemon= BOOL This defines whether TuneD will use daemon or not. It is boolean value. It can be \fBTrue\fR or \fB1\fR if the daemon is enabled and \fBFalse\fR or \fB0\fR if disabled. 
It is not recommended to disable the daemon, because many functions will not work without it, e.g. there will be no D-Bus, no settings rollback, no hotplug support, no dynamic tuning, ... .TP .BI dynamic_tuning= BOOL This defines whether the dynamic tuning is enabled. It is a boolean value. It can be \fBTrue\fR or \fB1\fR if the dynamic tuning is enabled and \fBFalse\fR or \fB0\fR if disabled. In that case only the static tuning will be used. Please note that even if it is enabled here, it is still possible to individually disable it in plugins. It is only applicable if \fBdaemon\fR is enabled. .TP .BI sleep_interval= INT The TuneD daemon is periodically woken after \fIINT\fR seconds and checks for events. By default this is set to 1 second. If you have a Python 2 interpreter with the patch from Red Hat Bugzilla #917709 applied, this controls the response time of TuneD to commands (i.e. if you request a profile switch, it may take up to 1 second until TuneD reacts). Increase this number for longer response times and more power savings (due to a lower number of wakeups). If you have an unpatched Python 2 interpreter, this setting will have no visible effect, because the interpreter will poll 20 times per second. It is only applicable if \fBdaemon\fR is enabled. .TP .BI update_interval= INT Update interval for dynamic tuning (in seconds). The TuneD daemon is periodically woken after \fIINT\fR seconds, updates its monitors, calculates new tuning parameters for enabled plugins and applies the changes. Plugins that have dynamic tuning disabled are not processed. By default the \fIINT\fR is set to 10 seconds. The TuneD daemon doesn't wake periodically if dynamic tuning is globally disabled (see \fBdynamic_tuning\fR) or this setting is set to 0. This must be a multiple of \fBsleep_interval\fR. It is only applicable if \fBdaemon\fR is enabled. .TP .BI recommend_command= BOOL This controls whether the recommend functionality is enabled. It is a boolean value. 
It can be \fBTrue\fR or \fB1\fR if the recommend command is enabled and \fBFalse\fR or \fB0\fR if disabled. If disabled, the \fBrecommend\fR command will not be available in the CLI, TuneD will not parse \fIrecommend.conf\fR and will return one hardcoded profile (by default \fBbalanced\fR). It is only applicable if \fBdaemon\fR is enabled. By default it's set to \fBTrue\fR. .TP .BI reapply_sysctl= BOOL This controls whether to reapply sysctl settings from \fI/run/sysctl.d/*.conf\fR, \fI/etc/sysctl.d/*.conf\fR and \fI/etc/sysctl.conf\fR after TuneD sysctl settings are applied. These are locations supported by \fBsysctl --system\fR, excluding those that contain sysctl configuration files provided by system packages. So if \fBreapply_sysctl\fR is set to \fBTrue\fR or \fB1\fR, TuneD sysctl settings will not override user-provided system sysctl settings. If set to \fBFalse\fR or \fB0\fR, TuneD sysctl settings will override system sysctl settings. By default it's set to \fBTrue\fR. .TP .BI default_instance_priority= INT Default instance (unit) priority. By default it's \fB0\fR. Each unit has a priority which is by default preset to the \fIINT\fR. It can be overridden in the TuneD profile by the \fBpriority\fR option. TuneD units are processed in the order defined by their priorities, i.e. the unit with the lowest number is processed first. .SH EXAMPLE .nf daemon = 1 dynamic_tuning = 1 sleep_interval = 1 update_interval = 10 recommend_command = 0 reapply_sysctl = 1 default_instance_priority = 0 .fi .SH FILES .I /etc/tuned/tuned\-main.conf .SH "SEE ALSO" .LP tuned(8) .SH AUTHOR Written by Jaroslav Škarvada <jskarvad@redhat.com>. .SH REPORTING BUGS Report bugs to https://bugzilla.redhat.com/. 07070100000054000081A40000000000000000000000016391BC3A000008D2000000000000000000000000000000000000003800000000tuned-2.19.0.29+git.b894a3e/man/tuned-profiles-atomic.7.\"/* .\" * All rights reserved .\" * Copyright (C) 2014-2017 Red Hat, Inc. 
.\" * Authors: Jaroslav Škarvada .\" * .\" * This program is free software; you can redistribute it and/or .\" * modify it under the terms of the GNU General Public License .\" * as published by the Free Software Foundation; either version 2 .\" * of the License, or (at your option) any later version. .\" * .\" * This program is distributed in the hope that it will be useful, .\" * but WITHOUT ANY WARRANTY; without even the implied warranty of .\" * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the .\" * GNU General Public License for more details. .\" * .\" * You should have received a copy of the GNU General Public License .\" * along with this program; if not, write to the Free Software .\" * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA. .\" */ .\" .TH TUNED_PROFILES_ATOMIC "7" "30 Mar 2017" "Fedora Power Management SIG" "TuneD" .SH NAME tuned\-profiles\-atomic - description of profiles provided for the Project Atomic .SH DESCRIPTION These profiles are provided for the Project Atomic. They provides performance optimizations for the Atomic hosts (bare metal) and virtual guests. .SH PROFILES The following profiles are provided: .TP .BI "atomic\-host" Profile optimized for Atomic hosts (bare metal). It is based on throughput\-performance profile. It additionally increases SELinux AVC cache, PID limit and tunes netfilter connections tracking. .TP .BI "atomic\-guest" Profile optimized for virtual Atomic guests. It is based on virtual\-guest profile. It additionally increases SELinux AVC cache, PID limit and tunes netfilter connections tracking. 
.SH "FILES" .nf .I /etc/tuned/* .I /usr/lib/tuned/* .SH "SEE ALSO" .BR tuned (8) .BR tuned\-adm (8) .BR tuned\-profiles (7) .BR tuned\-profiles\-sap (7) .BR tuned\-profiles\-sap\-hana (7) .BR tuned\-profiles\-mssql (7) .BR tuned\-profiles\-oracle (7) .BR tuned\-profiles\-realtime (7) .BR tuned\-profiles\-nfv\-host (7) .BR tuned\-profiles\-nfv\-guest (7) .BR tuned\-profiles\-cpu\-partitioning (7) .BR tuned\-profiles\-compat (7) .BR tuned\-profiles\-postgresql (7) .BR tuned\-profiles\-openshift (7) .SH AUTHOR .nf Jaroslav Škarvada <jskarvad@redhat.com> 07070100000055000081A40000000000000000000000016391BC3A00000E3C000000000000000000000000000000000000003800000000tuned-2.19.0.29+git.b894a3e/man/tuned-profiles-compat.7.\"/* .\" * All rights reserved .\" * Copyright (C) 2009-2017 Red Hat, Inc. .\" * Authors: Jaroslav Škarvada, Jan Kaluža, Jan Včelák, .\" * Marcela Mašláňová, Phil Knirsch .\" * .\" * This program is free software; you can redistribute it and/or .\" * modify it under the terms of the GNU General Public License .\" * as published by the Free Software Foundation; either version 2 .\" * of the License, or (at your option) any later version. .\" * .\" * This program is distributed in the hope that it will be useful, .\" * but WITHOUT ANY WARRANTY; without even the implied warranty of .\" * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the .\" * GNU General Public License for more details. .\" * .\" * You should have received a copy of the GNU General Public License .\" * along with this program; if not, write to the Free Software .\" * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA. .\" */ .\" .TH TUNED_PROFILES_COMPAT "7" "30 Mar 2017" "Fedora Power Management SIG" "TuneD" .SH NAME tuned\-profiles\-compat - description of profiles provided for backward compatibility .SH DESCRIPTION These profiles are provided for backward compatibility with the tuned-1x. 
They are no longer maintained and may be dropped at any time in the future. It's not recommended to use them for purposes other than backward compatibility. Some of them are only aliases to base TuneD profiles. Please do not use the compat profiles on new installations; instead, use profiles from the base tuned package or other tuned subpackages. .SH PROFILES The following profiles are considered compatibility profiles: .TP .BI "default" It is the lowest of the available profiles in regard to power saving and only enables the CPU and disk plugins of TuneD. .TP .BI "desktop\-powersave" A power saving profile directed at desktop systems. Enables ALPM power saving for SATA host adapters as well as the CPU, ethernet and disk plugins of TuneD. .TP .BI server\-powersave A power saving profile directed at server systems. Enables ALPM power saving for SATA host adapters, and activates the CPU and disk plugins of TuneD. .TP .BI laptop\-ac\-powersave Medium power saving profile directed at laptops running on AC. Enables ALPM power saving for SATA host adapters, WiFi power saving as well as the CPU, ethernet and disk plugins of TuneD. .TP .BI laptop\-battery\-powersave Strong power saving profile directed at laptops running on battery. Currently an alias to the powersave profile. .TP .BI "spindown\-disk" Strong power saving profile directed at machines with classic HDDs. It enables aggressive disk spin-down. Disk writeback values are increased and disk swappiness is lowered. Log syncing is disabled. All partitions are remounted with the 'noatime' option. .TP .BI "enterprise\-storage" Server profile for high disk throughput performance tuning. Disables power saving mechanisms and enables hugepages. Disk readahead values are increased. CPU governor is set to performance. 
.SH "FILES" .nf .I /etc/tuned/* .I /usr/lib/tuned/* .SH "SEE ALSO" .BR tuned (8) .BR tuned\-adm (8) .BR tuned\-profiles (7) .BR tuned\-profiles\-atomic (7) .BR tuned\-profiles\-sap (7) .BR tuned\-profiles\-sap\-hana (7) .BR tuned\-profiles\-mssql (7) .BR tuned\-profiles\-oracle (7) .BR tuned\-profiles\-realtime (7) .BR tuned\-profiles\-nfv\-host (7) .BR tuned\-profiles\-nfv\-guest (7) .BR tuned\-profiles\-cpu\-partitioning (7) .SH AUTHOR .nf Jaroslav Škarvada <jskarvad@redhat.com> Jan Kaluža <jkaluza@redhat.com> Jan Včelák <jvcelak@redhat.com> Marcela Mašláňová <mmaslano@redhat.com> Phil Knirsch <pknirsch@redhat.com> 07070100000056000081A40000000000000000000000016391BC3A0000097A000000000000000000000000000000000000004C00000000tuned-2.19.0.29+git.b894a3e/man/tuned-profiles-cpu-partitioning-powersave.7.\"/* .\" * All rights reserved .\" * Copyright (C) 2022 Red Hat, Inc. .\" * Authors: Christophe Fontaine .\" * .\" * This program is free software; you can redistribute it and/or .\" * modify it under the terms of the GNU General Public License .\" * as published by the Free Software Foundation; either version 2 .\" * of the License, or (at your option) any later version. .\" * .\" * This program is distributed in the hope that it will be useful, .\" * but WITHOUT ANY WARRANTY; without even the implied warranty of .\" * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the .\" * GNU General Public License for more details. .\" * .\" * You should have received a copy of the GNU General Public License .\" * along with this program; if not, write to the Free Software .\" * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA. 
.\" */ .\" .TH TUNED_PROFILES_CPU_PARTITIONING "7" "22 Nov 2022" "TuneD" .SH NAME tuned\-profiles\-cpu\-partitioning\-powersave - Partition CPUs into isolated and housekeeping with C-States enabled .SH DESCRIPTION The cpu\-partitioning\-powersave profile is similar to cpu\-partitioning profile, but gives more flexibility on the C-States configuration. .SH CONFIGURATION The cpu-partitioning-powersave profile is configured by editing the .I /etc/tuned/cpu-partitioning-powersave-variables.conf file. There are three configuration options: .TP .B isolated_cores=<CPU\-LIST> List of CPUs to isolate. This option is mandatory. Any CPUs not in this list is automatically considered a housekeeping CPU. .TP .B no_balance_cores=<CPU\-LIST> List of CPUs not be considered by the kernel when doing system wide process load\-balancing. Usually, this list should be the same as isolated_cores=. This option is optional. .TP .B max_power_state=<MAX_CSTATE> Maximum c-state the cores are allowed to enter. Can be expressed as it's name (C1E) or minimum wake-up latency, in micro-seconds. This parameter is provided as-is to `force_latency`. Default is set to "cstate.name:C1|10" to behave as cpu\-partitioning profile. .SH IMPORTANT NOTES .IP * Same recommendations as tuned\-profiles\-cpu\-partitioning (7) apply. .SH "FILES" .nf .I /etc/tuned/cpu\-partitioning\-variables.conf .I /etc/tuned/tuned\-main.conf .SH "SEE ALSO" .BR tuned (8) .BR tuned\-adm (8) .BR tuned\-profiles\-cpu\-partitioning (7) .SH AUTHOR .nf Christophe Fontaine <cfontain@redhat.com> 07070100000057000081A40000000000000000000000016391BC3A00001236000000000000000000000000000000000000004200000000tuned-2.19.0.29+git.b894a3e/man/tuned-profiles-cpu-partitioning.7.\"/* .\" * All rights reserved .\" * Copyright (C) 2015-2017 Red Hat, Inc. 
.\" * Authors: Jaroslav Škarvada, Luiz Capitulino .\" * .\" * This program is free software; you can redistribute it and/or .\" * modify it under the terms of the GNU General Public License .\" * as published by the Free Software Foundation; either version 2 .\" * of the License, or (at your option) any later version. .\" * .\" * This program is distributed in the hope that it will be useful, .\" * but WITHOUT ANY WARRANTY; without even the implied warranty of .\" * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the .\" * GNU General Public License for more details. .\" * .\" * You should have received a copy of the GNU General Public License .\" * along with this program; if not, write to the Free Software .\" * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA. .\" */ .\" .TH TUNED_PROFILES_CPU_PARTITIONING "7" "22 Feb 2018" "TuneD" .SH NAME tuned\-profiles\-cpu\-partitioning - Partition CPUs into isolated and housekeeping. .SH DESCRIPTION The cpu\-partitioning profile partitions the system CPUs into isolated and housekeeping CPUs. This profile is intended to be used for latency\-sensitive workloads. An isolated CPU incurs reduced jitter and reduced interruptions by the kernel. This is achieved by clearing the CPU from user\-space processes, movable kernel threads, interruption handlers, kernel timers, etc. The only fixed source of interruptions is the 1Hz tick maintained by the kernel to keep CPU usage statistics. Otherwise, the incurred jitter and interruptions, if any, depend on the kernel services used by the thread running on the isolated CPU. Threads that run a busy loop without doing system calls, such as user\-space drivers that access the hardware directly, are only expected to be interrupted once a second by the 1Hz tick. A housekeeping CPU is the opposite of an isolated CPU. 
Housekeeping CPUs run all daemons, shell processes, kernel threads, interrupt handlers and work that can be dispatched from isolated CPUs such as disk I/O, RCU work, timers, etc. .SH CONFIGURATION The cpu-partitioning profile is configured by editing the .I /etc/tuned/cpu-partitioning-variables.conf file. There are two configuration options: .TP .B isolated_cores=<CPU\-LIST> List of CPUs to isolate. This option is mandatory. Any CPU not in this list is automatically considered a housekeeping CPU. .TP .B no_balance_cores=<CPU\-LIST> List of CPUs not to be considered by the kernel when doing system\-wide process load\-balancing. Usually, this list should be the same as isolated_cores=. This option is optional. .SH IMPORTANT NOTES .IP * 2 The system should be rebooted after applying the cpu\-partitioning profile for the first time or after changing its configuration .IP * The cpu\-partitioning profile can be used on bare metal and in virtual machines .IP * When using the cpu\-partitioning profile on bare metal, it is strongly recommended to "mask" the ksm and ksmtuned services in systemd (if they are installed). This can be done with the following command: # systemctl mask ksm ksmtuned .IP * The cpu\-partitioning profile does not use the kernel's isolcpus= feature .IP * On a NUMA system, it is recommended to have at least one housekeeping CPU per NUMA node .IP * The cpu\-partitioning profile does not support isolating the L3 cache. This means that a housekeeping CPU can still thrash cache entries pertaining to isolated CPUs. It is recommended to use cache isolation technologies to remedy this problem, such as Intel's Cache Allocation Technology .IP * Whether or not the kernel is going to be able to deactivate the tick on isolated CPUs depends on a few factors concerning the running thread behavior.
Please consult the nohz_full documentation in the kernel to learn more .IP * The Linux real\-time project has put together a document on the best practices for writing real\-time applications. Even though the cpu\-partitioning profile does not guarantee real\-time response time, many of the techniques for writing real\-time applications also apply to applications intended to run under the cpu\-partitioning profile. Please refer to this document at .I https://rt.wiki.kernel.org .SH "FILES" .nf .I /etc/tuned/cpu\-partitioning\-variables.conf .I /etc/tuned/tuned\-main.conf .SH "SEE ALSO" .BR tuned (8) .BR tuned\-adm (8) .BR tuned\-profiles (7) .BR tuned\-profiles\-realtime (7) .BR tuned\-profiles\-nfv\-host (7) .BR tuned\-profiles\-nfv\-guest (7) .SH AUTHOR .nf Jaroslav Škarvada <jskarvad@redhat.com> Luiz Capitulino <lcapitulino@redhat.com> Andrew Theurer <atheurer@redhat.com> 07070100000058000081A40000000000000000000000016391BC3A000006B1000000000000000000000000000000000000003700000000tuned-2.19.0.29+git.b894a3e/man/tuned-profiles-mssql.7.\"/* .\" * All rights reserved .\" * Copyright (C) 2018 Red Hat, Inc. .\" * Authors: Jaroslav Škarvada .\" * .\" * This program is free software; you can redistribute it and/or .\" * modify it under the terms of the GNU General Public License .\" * as published by the Free Software Foundation; either version 2 .\" * of the License, or (at your option) any later version. .\" * .\" * This program is distributed in the hope that it will be useful, .\" * but WITHOUT ANY WARRANTY; without even the implied warranty of .\" * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the .\" * GNU General Public License for more details. .\" * .\" * You should have received a copy of the GNU General Public License .\" * along with this program; if not, write to the Free Software .\" * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA. .\" */ .\" .TH TUNED_PROFILES_MSSQL "7" "05 Jun 2018" "Red Hat, Inc."
"TuneD" .SH NAME tuned\-profiles\-mssql - description of profile provided for the MS SQL Server .SH DESCRIPTION This profile is provided for the MS SQL Server. It's based on the throughput-performance profile. .SH "FILES" .nf .I /etc/tuned/* .I /usr/lib/tuned/* .SH "SEE ALSO" .BR tuned (8) .BR tuned\-adm (8) .BR tuned\-profiles (7) .BR tuned\-profiles\-sap (7) .BR tuned\-profiles\-sap\-hana (7) .BR tuned\-profiles\-oracle (7) .BR tuned\-profiles\-atomic (7) .BR tuned\-profiles\-realtime (7) .BR tuned\-profiles\-nfv\-host (7) .BR tuned\-profiles\-nfv\-guest (7) .BR tuned\-profiles\-cpu\-partitioning (7) .BR tuned\-profiles\-compat (7) .BR tuned\-profiles\-postgresql (7) .BR tuned\-profiles\-openshift (7) .SH AUTHOR .nf Jaroslav Škarvada <jskarvad@redhat.com> 07070100000059000081A40000000000000000000000016391BC3A00000744000000000000000000000000000000000000003B00000000tuned-2.19.0.29+git.b894a3e/man/tuned-profiles-nfv-guest.7.\"/* .\" * All rights reserved .\" * Copyright (C) 2015-2017 Red Hat, Inc. .\" * Authors: Jaroslav Škarvada .\" * .\" * This program is free software; you can redistribute it and/or .\" * modify it under the terms of the GNU General Public License .\" * as published by the Free Software Foundation; either version 2 .\" * of the License, or (at your option) any later version. .\" * .\" * This program is distributed in the hope that it will be useful, .\" * but WITHOUT ANY WARRANTY; without even the implied warranty of .\" * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the .\" * GNU General Public License for more details. .\" * .\" * You should have received a copy of the GNU General Public License .\" * along with this program; if not, write to the Free Software .\" * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA. 
.\" */ .\" .TH TUNED_PROFILES_NFV_GUEST "7" "30 Mar 2017" "Fedora Power Management SIG" "TuneD" .SH NAME tuned\-profiles\-nfv\-guest - description of profile provided for the NFV guest .SH DESCRIPTION The profile is provided for the Network Function Virtualization (NFV) guest. .SH PROFILES The following profile is provided: .TP .BI "realtime\-virtual\-guest" Profile optimized for virtual guests based on realtime profile. .SH "FILES" .nf .I /etc/tuned/* .I /usr/lib/tuned/* .SH "SEE ALSO" .BR tuned (8) .BR tuned\-adm (8) .BR tuned\-profiles (7) .BR tuned\-profiles\-atomic (7) .BR tuned\-profiles\-sap (7) .BR tuned\-profiles\-sap\-hana (7) .BR tuned\-profiles\-mssql (7) .BR tuned\-profiles\-oracle (7) .BR tuned\-profiles\-realtime (7) .BR tuned\-profiles\-nfv\-host (7) .BR tuned\-profiles\-cpu\-partitioning (7) .BR tuned\-profiles\-compat (7) .BR tuned\-profiles\-postgresql (7) .BR tuned\-profiles\-openshift (7) .SH AUTHOR .nf Jaroslav Škarvada <jskarvad@redhat.com> 0707010000005A000081A40000000000000000000000016391BC3A0000073F000000000000000000000000000000000000003A00000000tuned-2.19.0.29+git.b894a3e/man/tuned-profiles-nfv-host.7.\"/* .\" * All rights reserved .\" * Copyright (C) 2015-2017 Red Hat, Inc. .\" * Authors: Jaroslav Škarvada .\" * .\" * This program is free software; you can redistribute it and/or .\" * modify it under the terms of the GNU General Public License .\" * as published by the Free Software Foundation; either version 2 .\" * of the License, or (at your option) any later version. .\" * .\" * This program is distributed in the hope that it will be useful, .\" * but WITHOUT ANY WARRANTY; without even the implied warranty of .\" * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the .\" * GNU General Public License for more details. 
.\" * .\" * You should have received a copy of the GNU General Public License .\" * along with this program; if not, write to the Free Software .\" * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA. .\" */ .\" .TH TUNED_PROFILES_NFV_HOST "7" "30 Mar 2017" "Fedora Power Management SIG" "TuneD" .SH NAME tuned\-profiles\-nfv\-host - description of profile provided for the NFV host .SH DESCRIPTION The profile is provided for the Network Function Virtualization (NFV) host. .SH PROFILES The following profile is provided: .TP .BI "realtime\-virtual\-host" Profile optimized for virtual hosts based on realtime profile. .SH "FILES" .nf .I /etc/tuned/* .I /usr/lib/tuned/* .SH "SEE ALSO" .BR tuned (8) .BR tuned\-adm (8) .BR tuned\-profiles (7) .BR tuned\-profiles\-atomic (7) .BR tuned\-profiles\-sap (7) .BR tuned\-profiles\-sap\-hana (7) .BR tuned\-profiles\-mssql (7) .BR tuned\-profiles\-oracle (7) .BR tuned\-profiles\-realtime (7) .BR tuned\-profiles\-nfv\-guest (7) .BR tuned\-profiles\-cpu\-partitioning (7) .BR tuned\-profiles\-compat (7) .BR tuned\-profiles\-postgresql (7) .BR tuned\-profiles\-openshift (7) .SH AUTHOR .nf Jaroslav Škarvada <jskarvad@redhat.com> 0707010000005B000081A40000000000000000000000016391BC3A00000826000000000000000000000000000000000000003B00000000tuned-2.19.0.29+git.b894a3e/man/tuned-profiles-openshift.7.\"/* .\" * All rights reserved .\" * Copyright (C) 2018-2021 Red Hat, Inc. .\" * Authors: Jaroslav Škarvada, Jiří Mencák .\" * .\" * This program is free software; you can redistribute it and/or .\" * modify it under the terms of the GNU General Public License .\" * as published by the Free Software Foundation; either version 2 .\" * of the License, or (at your option) any later version. .\" * .\" * This program is distributed in the hope that it will be useful, .\" * but WITHOUT ANY WARRANTY; without even the implied warranty of .\" * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. 
See the .\" * GNU General Public License for more details. .\" * .\" * You should have received a copy of the GNU General Public License .\" * along with this program; if not, write to the Free Software .\" * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA. .\" */ .\" .TH TUNED_PROFILES_OPENSHIFT "7" "02 Aug 2021" "Fedora Power Management SIG" "TuneD" .SH NAME tuned\-profiles\-openshift - description of profiles provided for the OpenShift platform .SH DESCRIPTION These profiles are provided for the OpenShift platform. .SH PROFILES The following profiles are provided: .TP .BI "openshift" Parent profile containing tuning shared by OpenShift control plane and worker nodes. .TP .BI "openshift\-control\-plane" Profile optimized for the OpenShift control plane. .TP .BI "openshift\-node" Profile optimized for general workloads on OpenShift worker nodes. .SH "FILES" .nf .I /etc/tuned/* .I /usr/lib/tuned/* .SH "SEE ALSO" .BR tuned (8) .BR tuned\-adm (8) .BR tuned\-profiles (7) .BR tuned\-profiles\-sap (7) .BR tuned\-profiles\-sap\-hana (7) .BR tuned\-profiles\-oracle (7) .BR tuned\-profiles\-mssql (7) .BR tuned\-profiles\-atomic (7) .BR tuned\-profiles\-realtime (7) .BR tuned\-profiles\-nfv\-host (7) .BR tuned\-profiles\-nfv\-guest (7) .BR tuned\-profiles\-cpu\-partitioning (7) .BR tuned\-profiles\-compat (7) .BR tuned\-profiles\-postgresql (7) .SH AUTHOR .nf Jaroslav Škarvada <jskarvad@redhat.com> Jiří Mencák <jmencak@redhat.com> 0707010000005C000081A40000000000000000000000016391BC3A00000794000000000000000000000000000000000000003800000000tuned-2.19.0.29+git.b894a3e/man/tuned-profiles-oracle.7.\"/* .\" * All rights reserved .\" * Copyright (C) 2015-2017 Red Hat, Inc.
.\" * Authors: Jaroslav Škarvada .\" * .\" * This program is free software; you can redistribute it and/or .\" * modify it under the terms of the GNU General Public License .\" * as published by the Free Software Foundation; either version 2 .\" * of the License, or (at your option) any later version. .\" * .\" * This program is distributed in the hope that it will be useful, .\" * but WITHOUT ANY WARRANTY; without even the implied warranty of .\" * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the .\" * GNU General Public License for more details. .\" * .\" * You should have received a copy of the GNU General Public License .\" * along with this program; if not, write to the Free Software .\" * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA. .\" */ .\" .TH TUNED_PROFILES_ORACLE "7" "30 Mar 2017" "Fedora Power Management SIG" "TuneD" .SH NAME tuned\-profiles\-oracle - description of profiles provided for the Oracle .SH DESCRIPTION These profiles are provided for the Oracle loads. .SH PROFILES The following profiles are provided: .TP .BI "oracle" Profile optimized for Oracle databases based on throughput\-performance profile. It additionally disables transparent huge pages and modifies some other performance related kernel parameters. 
.SH "FILES" .nf .I /etc/tuned/* .I /usr/lib/tuned/* .SH "SEE ALSO" .BR tuned (8) .BR tuned\-adm (8) .BR tuned\-profiles (7) .BR tuned\-profiles\-sap (7) .BR tuned\-profiles\-sap\-hana (7) .BR tuned\-profiles\-mssql (7) .BR tuned\-profiles\-atomic (7) .BR tuned\-profiles\-realtime (7) .BR tuned\-profiles\-nfv\-host (7) .BR tuned\-profiles\-nfv\-guest (7) .BR tuned\-profiles\-cpu\-partitioning (7) .BR tuned\-profiles\-compat (7) .BR tuned\-profiles\-postgresql (7) .BR tuned\-profiles\-openshift (7) .SH AUTHOR .nf Jaroslav Škarvada <jskarvad@redhat.com> 0707010000005D000081A40000000000000000000000016391BC3A00000839000000000000000000000000000000000000003C00000000tuned-2.19.0.29+git.b894a3e/man/tuned-profiles-postgresql.7.\"/* .\" * All rights reserved .\" * Copyright (C) 2020 Red Hat, Inc. .\" * Copyright (C) 2020 Deutsches Elektronen-Synchrotron DESY .\" * Authors: Jaroslav Škarvada .\" * Tigran Mkrtchyan .\" * .\" * This program is free software; you can redistribute it and/or .\" * modify it under the terms of the GNU General Public License .\" * as published by the Free Software Foundation; either version 2 .\" * of the License, or (at your option) any later version. .\" * .\" * This program is distributed in the hope that it will be useful, .\" * but WITHOUT ANY WARRANTY; without even the implied warranty of .\" * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the .\" * GNU General Public License for more details. .\" * .\" * You should have received a copy of the GNU General Public License .\" * along with this program; if not, write to the Free Software .\" * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA. .\" */ .\" .TH TUNED_PROFILES_POSTGRESQL "7" "29 Jul 2020" "Fedora Power Management SIG" "TuneD" .SH NAME tuned\-profiles\-postgresql - description of profiles provided for PostgreSQL Server .SH DESCRIPTION These profiles are provided for the PostgreSQL Database server. 
.SH PROFILES The following profiles are provided: .TP .BI "postgresql" Profile optimized for PostgreSQL databases based on throughput\-performance profile. It additionally disables transparent huge pages and modifies some other performance related kernel parameters. .SH "FILES" .nf .I /etc/tuned/* .I /usr/lib/tuned/* .SH "SEE ALSO" .BR tuned (8) .BR tuned\-adm (8) .BR tuned\-profiles (7) .BR tuned\-profiles\-sap (7) .BR tuned\-profiles\-sap\-hana (7) .BR tuned\-profiles\-oracle (7) .BR tuned\-profiles\-mssql (7) .BR tuned\-profiles\-atomic (7) .BR tuned\-profiles\-realtime (7) .BR tuned\-profiles\-nfv\-host (7) .BR tuned\-profiles\-nfv\-guest (7) .BR tuned\-profiles\-cpu\-partitioning (7) .BR tuned\-profiles\-compat (7) .BR tuned\-profiles\-openshift (7) .SH AUTHOR .nf Jaroslav Škarvada <jskarvad@redhat.com> Tigran Mkrtchyan <tigran.mkrtchyan@desy.de> 0707010000005E000081A40000000000000000000000016391BC3A000006F6000000000000000000000000000000000000003A00000000tuned-2.19.0.29+git.b894a3e/man/tuned-profiles-realtime.7.\"/* .\" * All rights reserved .\" * Copyright (C) 2015-2017 Red Hat, Inc. .\" * Authors: Jaroslav Škarvada .\" * .\" * This program is free software; you can redistribute it and/or .\" * modify it under the terms of the GNU General Public License .\" * as published by the Free Software Foundation; either version 2 .\" * of the License, or (at your option) any later version. .\" * .\" * This program is distributed in the hope that it will be useful, .\" * but WITHOUT ANY WARRANTY; without even the implied warranty of .\" * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the .\" * GNU General Public License for more details. .\" * .\" * You should have received a copy of the GNU General Public License .\" * along with this program; if not, write to the Free Software .\" * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA. 
.\" */ .\" .TH TUNED_PROFILES_REALTIME "7" "30 Mar 2017" "Fedora Power Management SIG" "TuneD" .SH NAME tuned\-profiles\-realtime - description of profiles provided for the realtime .SH DESCRIPTION These profiles are provided for the realtime. .SH PROFILES The following profiles are provided: .TP .BI "realtime" Profile optimized for realtime. .SH "FILES" .nf .I /etc/tuned/* .I /usr/lib/tuned/* .SH "SEE ALSO" .BR tuned (8) .BR tuned\-adm (8) .BR tuned\-profiles (7) .BR tuned\-profiles\-atomic (7) .BR tuned\-profiles\-sap (7) .BR tuned\-profiles\-sap\-hana (7) .BR tuned\-profiles\-mssql (7) .BR tuned\-profiles\-oracle (7) .BR tuned\-profiles\-nfv\-host (7) .BR tuned\-profiles\-nfv\-guest (7) .BR tuned\-profiles\-cpu\-partitioning (7) .BR tuned\-profiles\-compat (7) .BR tuned\-profiles\-postgresql (7) .BR tuned\-profiles\-openshift (7) .SH AUTHOR .nf Jaroslav Škarvada <jskarvad@redhat.com> 0707010000005F000081A40000000000000000000000016391BC3A00000887000000000000000000000000000000000000003A00000000tuned-2.19.0.29+git.b894a3e/man/tuned-profiles-sap-hana.7.\"/* .\" * All rights reserved .\" * Copyright (C) 2009-2017 Red Hat, Inc. .\" * Authors: Jaroslav Škarvada .\" * .\" * This program is free software; you can redistribute it and/or .\" * modify it under the terms of the GNU General Public License .\" * as published by the Free Software Foundation; either version 2 .\" * of the License, or (at your option) any later version. .\" * .\" * This program is distributed in the hope that it will be useful, .\" * but WITHOUT ANY WARRANTY; without even the implied warranty of .\" * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the .\" * GNU General Public License for more details. .\" * .\" * You should have received a copy of the GNU General Public License .\" * along with this program; if not, write to the Free Software .\" * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA. 
.\" */ .\" .TH TUNED_PROFILES_SAP_HANA "7" "30 Mar 2017" "Fedora Power Management SIG" "TuneD" .SH NAME tuned\-profiles\-sap\-hana - description of profiles provided for the SAP HANA .SH DESCRIPTION These profiles provides performance optimizations for the SAP HANA applications. .SH PROFILES The following profiles are provided: .TP .BI "sap\-hana" A performance optimized profile for the SAP HANA applications. It disables power saving mechanisms and enables sysctl settings that improve throughput performance of disk and network IO. CPU governor is set to performance and CPU energy performance bias is set to performance. It also disables transparent hugepages, locks CPU to the low C states (by PM QoS) and tunes sysctl regarding semaphores. .SH "FILES" .nf .I /etc/tuned/* .I /usr/lib/tuned/* .SH "SEE ALSO" .BR tuned (8) .BR tuned\-adm (8) .BR tuned\-profiles (7) .BR tuned\-profiles\-atomic (7) .BR tuned\-profiles\-sap (7) .BR tuned\-profiles\-mssql (7) .BR tuned\-profiles\-oracle (7) .BR tuned\-profiles\-realtime (7) .BR tuned\-profiles\-nfv\-host (7) .BR tuned\-profiles\-nfv\-guest (7) .BR tuned\-profiles\-cpu-partitioning (7) .BR tuned\-profiles\-compat (7) .BR tuned\-profiles\-postgresql (7) .BR tuned\-profiles\-openshift (7) .SH AUTHOR .nf Jaroslav Škarvada <jskarvad@redhat.com> 07070100000060000081A40000000000000000000000016391BC3A000007FD000000000000000000000000000000000000003500000000tuned-2.19.0.29+git.b894a3e/man/tuned-profiles-sap.7.\"/* .\" * All rights reserved .\" * Copyright (C) 2009-2017 Red Hat, Inc. .\" * Authors: Jaroslav Škarvada .\" * .\" * This program is free software; you can redistribute it and/or .\" * modify it under the terms of the GNU General Public License .\" * as published by the Free Software Foundation; either version 2 .\" * of the License, or (at your option) any later version. 
.\" * .\" * This program is distributed in the hope that it will be useful, .\" * but WITHOUT ANY WARRANTY; without even the implied warranty of .\" * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the .\" * GNU General Public License for more details. .\" * .\" * You should have received a copy of the GNU General Public License .\" * along with this program; if not, write to the Free Software .\" * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA. .\" */ .\" .TH TUNED_PROFILES_SAP "7" "30 Mar 2017" "Fedora Power Management SIG" "TuneD" .SH NAME tuned\-profiles\-sap - description of profiles provided for the SAP NetWeaver .SH DESCRIPTION These profiles provides performance optimizations for the SAP NetWeaver applications. .SH PROFILES The following profiles are provided: .TP .BI "sap\-netweaver" A performance optimized profile for the SAP NetWeaver applications. It is based on throughput\-performance profile. It additionally tunes sysctl settings regarding shared memory, semaphores and maximum number of memory map areas a process may have. 
.SH "FILES" .nf .I /etc/tuned/* .I /usr/lib/tuned/* .SH "SEE ALSO" .BR tuned (8) .BR tuned\-adm (8) .BR tuned\-profiles (7) .BR tuned\-profiles\-atomic (7) .BR tuned\-profiles\-sap\-hana (7) .BR tuned\-profiles\-mssql (7) .BR tuned\-profiles\-oracle (7) .BR tuned\-profiles\-realtime (7) .BR tuned\-profiles\-nfv\-host (7) .BR tuned\-profiles\-nfv\-guest (7) .BR tuned\-profiles\-cpu\-partitioning (7) .BR tuned\-profiles\-compat (7) .BR tuned\-profiles\-postgresql (7) .BR tuned\-profiles\-openshift (7) .SH AUTHOR .nf Jaroslav Škarvada <jskarvad@redhat.com> 07070100000061000081A40000000000000000000000016391BC3A00000836000000000000000000000000000000000000004300000000tuned-2.19.0.29+git.b894a3e/man/tuned-profiles-spectrumscale-ece.7.\"/* .\" * All rights reserved .\" * Copyright (C) 2020 International Business Machines Corporation .\" * Authors: Luis Bolinches .\" * .\" * This program is free software; you can redistribute it and/or .\" * modify it under the terms of the GNU General Public License .\" * as published by the Free Software Foundation; either version 2 .\" * of the License, or (at your option) any later version. .\" * .\" * This program is distributed in the hope that it will be useful, .\" * but WITHOUT ANY WARRANTY; without even the implied warranty of .\" * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the .\" * GNU General Public License for more details. .\" * .\" * You should have received a copy of the GNU General Public License .\" * along with this program; if not, write to the Free Software .\" * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA. .\" */ .\" .TH TUNED_PROFILES_SPECTRUMSCALE_ECE "7" "22 Apr 2020" "Fedora Power Management SIG" "TuneD" .SH NAME tuned\-profiles\-spectrumscale-ece - description of profiles provided for Spectrum Scale Erasure Code Edition .SH DESCRIPTION These profiles are provided for Spectrum Scale Erasure Code Edition servers. 
.SH PROFILES The following profiles are provided: .TP .BI "spectrumscale-ece" Profile optimized for Spectrum Scale Erasure Code Edition based on throughput\-performance profile. It additionally enables automatic NUMA balancing and modifies some other performance related kernel parameters. .SH "FILES" .nf .I /etc/tuned/* .I /usr/lib/tuned/* .SH "SEE ALSO" .BR tuned (8) .BR tuned\-adm (8) .BR tuned\-profiles (7) .BR tuned\-profiles\-sap (7) .BR tuned\-profiles\-sap\-hana (7) .BR tuned\-profiles\-mssql (7) .BR tuned\-profiles\-atomic (7) .BR tuned\-profiles\-oracle (7) .BR tuned\-profiles\-realtime (7) .BR tuned\-profiles\-nfv\-host (7) .BR tuned\-profiles\-nfv\-guest (7) .BR tuned\-profiles\-cpu\-partitioning (7) .BR tuned\-profiles\-compat (7) .BR tuned\-profiles\-postgresql (7) .BR tuned\-profiles\-openshift (7) .SH AUTHOR .nf Luis Bolinches <luis.bolinches@fi.ibm.com> 07070100000062000081A40000000000000000000000016391BC3A00001AF6000000000000000000000000000000000000003100000000tuned-2.19.0.29+git.b894a3e/man/tuned-profiles.7.\"/* .\" * All rights reserved .\" * Copyright (C) 2009-2017 Red Hat, Inc. .\" * Authors: Jaroslav Škarvada, Jan Kaluža, Jan Včelák, .\" * Marcela Mašláňová, Phil Knirsch .\" * .\" * This program is free software; you can redistribute it and/or .\" * modify it under the terms of the GNU General Public License .\" * as published by the Free Software Foundation; either version 2 .\" * of the License, or (at your option) any later version. .\" * .\" * This program is distributed in the hope that it will be useful, .\" * but WITHOUT ANY WARRANTY; without even the implied warranty of .\" * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the .\" * GNU General Public License for more details. .\" * .\" * You should have received a copy of the GNU General Public License .\" * along with this program; if not, write to the Free Software .\" * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA. 
.\" */ .\" .TH TUNED_PROFILES "7" "30 Mar 2017" "Fedora Power Management SIG" "TuneD" .SH NAME tuned\-profiles - description of basic TuneD profiles .SH DESCRIPTION These are the base profiles which are mostly shipped in the base tuned package. They are targeted to various goals. Mostly they provide performance optimizations but there are also profiles targeted to low power consumption, low latency and others. You can mostly deduce the purpose of the profile by its name or you can see full description below. The profiles are stored in subdirectories below \fI/usr/lib/tuned\fP. If you need to customize the profiles, you can copy them to \fI/etc/tuned\fP and modify them as you need. When loading profiles with the same name, the /etc/tuned takes precedence. In such case you will not lose your customized profiles between TuneD updates. The power saving profiles contain settings that are typically not enabled by default as they will noticeably impact the latency/performance of your system as opposed to the power saving mechanisms that are enabled by default. On the other hand the performance profiles disable the additional power saving mechanisms of TuneD as they would negatively impact throughput or latency. .SH PROFILES At the moment we're providing the following pre\-defined profiles: .TP .BI "balanced" It is the default profile. It provides balanced power saving and performance. At the moment it enables CPU and disk plugins of TuneD and it makes sure the conservative governor is active (if supported by the current cpufreq driver). It enables ALPM power saving for SATA host adapters and sets the link power management policy to medium_power. It also sets the CPU energy performance bias to normal. It also enables AC97 audio power saving or (it depends on your system) HDA\-Intel power savings with 10 seconds timeout. In case your system contains supported Radeon graphics card (with enabled KMS) it configures it to automatic power saving. 
.TP .BI "powersave" Maximal power saving. At the moment it enables USB autosuspend (in case the environment variable USB_AUTOSUSPEND is set to 1), enables ALPM power saving for SATA host adapters and sets the link power management policy to min_power. It also enables WiFi power saving and makes sure the ondemand governor is active (if supported by the current cpufreq driver). It sets the CPU energy performance bias to powersave. It also enables AC97 audio power saving or, depending on your system, HDA\-Intel power saving (with a 10 second timeout). If your system contains a supported Radeon graphics card (with KMS enabled), it is configured for automatic power saving. On Asus Eee PCs, the dynamic Super Hybrid Engine is enabled. .TP .BI "throughput\-performance" Profile for typical throughput performance tuning. Disables power saving mechanisms and enables sysctl settings that improve the throughput performance of your disk and network IO. CPU governor is set to performance and CPU energy performance bias is set to performance. Disk readahead values are increased. .TP .BI "accelerator\-performance" This profile contains the same tuning as the throughput\-performance profile. Additionally, it locks the CPU to low C-states so that the latency is less than 100us. This improves the performance of certain accelerators, such as GPUs. .TP .BI "latency\-performance" Profile for low latency performance tuning. Disables power saving mechanisms. CPU governor is set to performance and locked to low C-states (by PM QoS). CPU energy performance bias is set to performance. .TP .BI "network\-throughput" Profile for throughput network tuning. It is based on the throughput\-performance profile. It additionally increases kernel network buffers. .TP .BI "network\-latency" Profile for low latency network tuning. It is based on the latency\-performance profile. It additionally disables transparent hugepages and NUMA balancing, and tunes several other network\-related sysctl parameters.
.TP .BI "desktop" Profile optimized for desktops, based on the balanced profile. It additionally enables scheduler autogroups for better responsiveness of interactive applications. .TP .BI "hpc\-compute" Profile optimized for high\-performance computing. It is based on the latency\-performance profile. .TP .BI "virtual\-guest" Profile optimized for virtual guests, based on the throughput\-performance profile. It additionally decreases virtual memory swappiness and increases the dirty_ratio settings. .TP .BI "virtual\-host" Profile optimized for virtual hosts, based on the throughput\-performance profile. It additionally enables more aggressive writeback of dirty pages. .TP .BI "intel\-sst" Profile optimized for systems with user-defined Intel Speed Select Technology configurations. This profile is intended to be used as an overlay on other profiles (e.g. the cpu\-partitioning profile), example: .B tuned\-adm profile cpu\-partitioning intel\-sst .TP .BI "optimize\-serial\-console" Profile which tunes down I/O activity to the serial console by reducing the printk value. This should make the serial console more responsive. This profile is intended to be used as an overlay on other profiles (e.g.
throughput\-performance profile), example: .B tuned\-adm profile throughput\-performance optimize\-serial\-console .SH "FILES" .nf .I /etc/tuned/* .I /usr/lib/tuned/* .SH "SEE ALSO" .BR tuned (8) .BR tuned\-adm (8) .BR tuned\-profiles\-atomic (7) .BR tuned\-profiles\-sap (7) .BR tuned\-profiles\-sap-hana (7) .BR tuned\-profiles\-oracle (7) .BR tuned\-profiles\-realtime (7) .BR tuned\-profiles\-nfv\-host (7) .BR tuned\-profiles\-nfv\-guest (7) .BR tuned\-profiles\-cpu\-partitioning (7) .BR tuned\-profiles\-compat (7) .BR tuned\-profiles\-postgresql (7) .BR tuned\-profiles\-openshift (7) .SH AUTHOR .nf Jaroslav Škarvada <jskarvad@redhat.com> Jan Kaluža <jkaluza@redhat.com> Jan Včelák <jvcelak@redhat.com> Marcela Mašláňová <mmaslano@redhat.com> Phil Knirsch <pknirsch@redhat.com> 07070100000063000081A40000000000000000000000016391BC3A00000A17000000000000000000000000000000000000002800000000tuned-2.19.0.29+git.b894a3e/man/tuned.8.\"/*. .\" * All rights reserved .\" * Copyright (C) 2009-2013 Red Hat, Inc. .\" * Authors: Jan Kaluža, Jan Včelák, Jaroslav Škarvada, .\" * Phil Knirsch .\" * .\" * This program is free software; you can redistribute it and/or .\" * modify it under the terms of the GNU General Public License .\" * as published by the Free Software Foundation; either version 2 .\" * of the License, or (at your option) any later version. .\" * .\" * This program is distributed in the hope that it will be useful, .\" * but WITHOUT ANY WARRANTY; without even the implied warranty of .\" * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the .\" * GNU General Public License for more details. .\" * .\" * You should have received a copy of the GNU General Public License .\" * along with this program; if not, write to the Free Software .\" * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA. .\" */ .\". 
.TH "tuned" "8" "28 Mar 2012" "Fedora Power Management SIG" "Adaptive system tuning daemon" .SH NAME TuneD - dynamic adaptive system tuning daemon .SH SYNOPSIS \fBtuned\fP [\fIoptions\fP] .SH DESCRIPTION \fBtuned\fR is a dynamic adaptive system tuning daemon that tunes system settings dynamically depending on usage. .SH OPTIONS .TP 12 .BI \-d "\fR, \fP" \-\-daemon This option starts \fBtuned\fP as a daemon (forking at startup) instead of running in the foreground. .TP 12 .BI \-D "\fR, \fP" \-\-debug Sets the highest logging level. This can be very useful when troubleshooting problems with \fBtuned\fP. .TP 12 .BI \-h "\fR, \fP" \-\-help Show this help. .TP 12 .BI \-l " \fR[" \fILOG "\fR], " \fB\-\-log \fR[ \fB=\fILOG\fR]\fP Log to the file \fILOG\fP. If no \fILOG\fP file is specified, \fB/var/log/tuned/tuned.log\fP is used. .TP 12 .BI \--no-dbus Do not attach to DBus. .TP 12 .BI \-P " \fR[" \fIPID "\fR], " \fB\-\-pid \fR[ \fB=\fIPID\fR]\fP Write the process ID to the \fIPID\fP file. If no \fIPID\fP file is specified, \fB/run/tuned/tuned.pid\fP is used. .TP 12 .BI \-p "\fR \fP" \fIPROFILE\fP "\fR, \fP" \-\-profile "\fR \fP" \fIPROFILE\fP Tuning profile to be activated. It will override other settings (e.g. from \fBtuned-adm\fP). This is intended for debugging purposes. .TP 12 .BI \-v "\fR, \fP" \-\-version Show version information. .SH "FILES" .nf /etc/tuned .SH "SEE ALSO" .LP tuned.conf(5) tuned\-adm(8) .SH AUTHOR .nf Jan Kaluža <jkaluza@redhat.com> Jan Včelák <jvcelak@redhat.com> Jaroslav Škarvada <jskarvad@redhat.com> Phil Knirsch <pknirsch@redhat.com> .SH REPORTING BUGS Report bugs to https://bugzilla.redhat.com/.
07070100000064000081A40000000000000000000000016391BC3A00000B4E000000000000000000000000000000000000002D00000000tuned-2.19.0.29+git.b894a3e/man/tuned.conf.5.TH "tuned.conf" "5" "13 Mar 2012" "Jan Kaluža" "tuned.conf file format description" .SH NAME tuned.conf - TuneD profile definition .SH DESCRIPTION This man page documents the format of TuneD 2.0 profile definition files. The profile definition is stored in the /etc/tuned/<profile_name>/tuned.conf or /usr/lib/tuned/<profile_name>/tuned.conf file, where the /etc/tuned/ directory has higher priority. The \fBtuned.conf\fR file configures the profile and is in INI file format. .SH MAIN SECTION The main section is called "[main]" and can contain the following options: .TP include= Includes a profile with the given name. This allows you to base a new profile on an already existing profile. If there are conflicting parameters in the new profile and the base profile, the parameters from the new profile are used. .SH PLUGINS Every other section defines one plugin. The name of the section is used as the name of the plugin and is used in logs to identify it. There can be only one plugin of a particular type tuning a particular device. Conflicts are by default resolved by merging the options of both plugins together. This can be changed with the "replace" option. Every plugin section can contain the following options: .TP type= Plugin type. Currently there are the following upstream plugins: audio, bootloader, cpu, disk, eeepc_she, modules, mounts, net, script, scsi_host, selinux, scheduler, sysctl, sysfs, systemd, usb, video, vm. This list may be incomplete. If you installed TuneD through RPM, you can list the upstream plugins with the following command: .B rpm -ql tuned | grep 'plugins/plugin_.*.py$' Check the plugins directory returned by this command to see all plugins (e.g. plugins provided by 3rd party packages). .TP devices= Comma separated list of devices which should be tuned by this plugin instance.
If you omit this option, all found devices will be tuned. .TP replace=1 If there is a conflict between two plugins (meaning two plugins of the same type trying to configure the same devices), the plugin defined last replaces all options defined by the previously defined plugin. .LP Plugins can also have plugin\-specific options. .SH "EXAMPLE" .nf [main] # Includes plugins defined in "included" profile. include=included # Define my_sysctl plugin [my_sysctl] type=sysctl # This plugin will replace any sysctl plugin defined in "included" profile replace=1 # 256 KB default performs well experimentally. net.core.rmem_default = 262144 net.core.wmem_default = 262144 # Define my_script plugin # Both scripts (profile.sh from this profile and script from "included" # profile) will be run, because if there is no "replace=1" option the # default action is merge. [my_script] type=script script=${i:PROFILE_DIR}/profile.sh .fi .SH "SEE ALSO" .LP tuned(8) .SH AUTHOR Written by Jan Kaluža <jkaluza@redhat.com>. .SH REPORTING BUGS Report bugs to https://bugzilla.redhat.com/. 07070100000065000081A40000000000000000000000016391BC3A0000047C000000000000000000000000000000000000002D00000000tuned-2.19.0.29+git.b894a3e/man/varnetload.8.TH "varnetload" "8" "13 Jan 2011" "Phil Knirsch" "Tool to create reproducible network traffic" .SH NAME varnetload - tool to create reproducible network traffic .SH SYNOPSIS \fBvarnetload\fP [\fI\-d delay\fP] [\fI\-t time to run\fP] [\fI\-u url\fP] .SH DESCRIPTION \fBvarnetload\fR is a simple Python script to create reproducible sustained network traffic. In order to use it effectively, you need to have an HTTP server present in your local LAN where you can put files. Upload a large HTML file (or any other kind of file) to the HTTP server. Use the -u option of the script to point to that URL. Play with the delay option to vary the load put on your network. .TP delay Sets the delay between individual downloads in milliseconds. The default value is 1000.
But you may find values in the range 0 to 500 more useful. .TP time to run Sets the total run time in seconds. The default value is 60. .TP url Sets the downloaded resource. The default is http://myhost.mydomain/index.html. .SH "SEE ALSO" .LP tuned(8) diskdevstat(8) netdevstat(8) scomes(8) .SH AUTHOR Written by Phil Knirsch <pknirsch@redhat.com>. .SH REPORTING BUGS Report bugs to https://bugzilla.redhat.com/. 07070100000066000081A40000000000000000000000016391BC3A000002A2000000000000000000000000000000000000002900000000tuned-2.19.0.29+git.b894a3e/modules.conf# This file specifies additional parameters to kernel modules added by TuneD. # Its content is set by the TuneD modules plugin. # # Please do not edit this file. The content of this file can be overwritten by # a switch of the TuneD profile. # # If you need to add a kernel module parameter which should be handled by TuneD, # create a TuneD profile containing the following: # # [modules] # MODULE_NAME = MODULE_PARAMETERS # # Then switch to your newly created profile by: # # tuned-adm profile YOUR_NEW_PROFILE # # and reboot or reload the module # # TuneD tries to automatically reload the module if it is specified the following # way: # # [modules] # MODULE_NAME = +r,MODULE_PARAMETERS # 07070100000067000041ED0000000000000000000000266391BC3A00000000000000000000000000000000000000000000002500000000tuned-2.19.0.29+git.b894a3e/profiles07070100000068000041ED0000000000000000000000026391BC3A00000000000000000000000000000000000000000000003D00000000tuned-2.19.0.29+git.b894a3e/profiles/accelerator-performance07070100000069000081A40000000000000000000000016391BC3A0000084C000000000000000000000000000000000000004800000000tuned-2.19.0.29+git.b894a3e/profiles/accelerator-performance/tuned.conf# # tuned configuration # [main] summary=Throughput performance based tuning with disabled higher latency STOP states [cpu] governor=performance energy_perf_bias=performance min_perf_pct=100 force_latency=99 [disk] readahead=>4096 [sysctl] # If a workload mostly uses anonymous memory
and it hits this limit, the entire # working set is buffered for I/O, and any more write buffering would require # swapping, so it's time to throttle writes until I/O can catch up. Workloads # that mostly use file mappings may be able to use even higher values. # # The generator of dirty data starts writeback at this percentage (system default # is 20%) vm.dirty_ratio = 40 # Start background writeback (via writeback threads) at this percentage (system # default is 10%) vm.dirty_background_ratio = 10 # PID allocation wrap value. When the kernel's next PID value # reaches this value, it wraps back to a minimum PID value. # PIDs of value pid_max or larger are not allocated. # # A suggested value for pid_max is 1024 * <# of cpu cores/threads in system> # e.g., a box with 32 cpus, the default of 32768 is reasonable, for 64 cpus, # 65536, for 4096 cpus, 4194304 (which is the upper limit possible). #kernel.pid_max = 65536 # The swappiness parameter controls the tendency of the kernel to move # processes out of physical memory and onto the swap disk. # 0 tells the kernel to avoid swapping processes out of physical memory # for as long as possible # 100 tells the kernel to aggressively swap processes out of physical memory # and move them to swap cache vm.swappiness=10 [scheduler] # ktune sysctl settings for rhel6 servers, maximizing i/o throughput # # Minimal preemption granularity for CPU-bound tasks: # (default: 1 msec# (1 + ilog(ncpus)), units: nanoseconds) sched_min_granularity_ns = 10000000 # SCHED_OTHER wake-up granularity. # (default: 1 msec# (1 + ilog(ncpus)), units: nanoseconds) # # This option delays the preemption effects of decoupled workloads # and reduces their over-scheduling. Synchronous workloads will still # have immediate wakeup/sleep latencies. 
sched_wakeup_granularity_ns = 15000000 0707010000006A000041ED0000000000000000000000026391BC3A00000000000000000000000000000000000000000000003200000000tuned-2.19.0.29+git.b894a3e/profiles/atomic-guest0707010000006B000081A40000000000000000000000016391BC3A00000127000000000000000000000000000000000000003D00000000tuned-2.19.0.29+git.b894a3e/profiles/atomic-guest/tuned.conf# # tuned configuration # [main] summary=Optimize virtual guests based on the Atomic variant include=virtual-guest [selinux] avc_cache_threshold=65536 [net] nf_conntrack_hashsize=1048576 [sysctl] kernel.pid_max=131072 net.netfilter.nf_conntrack_max=1048576 fs.inotify.max_user_watches=65536 0707010000006C000041ED0000000000000000000000026391BC3A00000000000000000000000000000000000000000000003100000000tuned-2.19.0.29+git.b894a3e/profiles/atomic-host0707010000006D000081A40000000000000000000000016391BC3A00000133000000000000000000000000000000000000003C00000000tuned-2.19.0.29+git.b894a3e/profiles/atomic-host/tuned.conf# # tuned configuration # [main] summary=Optimize bare metal systems running the Atomic variant include=throughput-performance [selinux] avc_cache_threshold=65536 [net] nf_conntrack_hashsize=1048576 [sysctl] kernel.pid_max=131072 net.netfilter.nf_conntrack_max=1048576 fs.inotify.max_user_watches=65536 0707010000006E000041ED0000000000000000000000026391BC3A00000000000000000000000000000000000000000000002E00000000tuned-2.19.0.29+git.b894a3e/profiles/balanced0707010000006F000081A40000000000000000000000016391BC3A00000175000000000000000000000000000000000000003900000000tuned-2.19.0.29+git.b894a3e/profiles/balanced/tuned.conf# # tuned configuration # [main] summary=General non-specialized tuned profile [modules] cpufreq_conservative=+r [cpu] priority=10 governor=conservative|powersave energy_perf_bias=normal [audio] timeout=10 [video] radeon_powersave=dpm-balanced, auto [disk] # Comma separated list of devices, all devices if commented out. 
# devices=sda [scsi_host] alpm=medium_power 07070100000070000041ED0000000000000000000000026391BC3A00000000000000000000000000000000000000000000003600000000tuned-2.19.0.29+git.b894a3e/profiles/cpu-partitioning07070100000071000041ED0000000000000000000000026391BC3A00000000000000000000000000000000000000000000004000000000tuned-2.19.0.29+git.b894a3e/profiles/cpu-partitioning-powersave07070100000072000081A40000000000000000000000016391BC3A000001D5000000000000000000000000000000000000006A00000000tuned-2.19.0.29+git.b894a3e/profiles/cpu-partitioning-powersave/cpu-partitioning-powersave-variables.conf# Examples: # isolated_cores=2,4-7 # isolated_cores=2-23 # # Reserve 1 core per socket for housekeeping, isolate the rest. isolated_cores=${f:calc_isolated_cores:1} # To disable the kernel load balancing in certain isolated CPUs: # no_balance_cores=5-10 # Specifies the maximum powerstate for idling cores. # given to force_latency tuned parameter. To have the same behavior # as cpu-partitioning profile, set to "cstate.name:C1|10" max_power_state=cstate.name:C1|10 07070100000073000081A40000000000000000000000016391BC3A000004AB000000000000000000000000000000000000004B00000000tuned-2.19.0.29+git.b894a3e/profiles/cpu-partitioning-powersave/tuned.conf# tuned configuration # [main] summary=Optimize for CPU partitioning with additional powersave include=cpu-partitioning [variables] # User is responsible for updating variables.conf with variable content such as isolated_cores=X-Y include=/etc/tuned/cpu-partitioning-powersave-variables.conf isolated_cores_assert_check = \\${isolated_cores} # Make sure isolated_cores is defined before any of the variables that # use it (such as assert1) are defined, so that child profiles can set # isolated_cores directly in the profile (tuned.conf) isolated_cores = ${isolated_cores} # Fail if isolated_cores are not set assert1=${f:assertion_non_equal:isolated_cores are set:${isolated_cores}:${isolated_cores_assert_check}} max_power_state_assert_check = 
\\${max_power_state} max_power_state = ${max_power_state} # Fail if max_power_state is not set assert2=${f:assertion_non_equal:max_power_state is set:${max_power_state}:${max_power_state_assert_check}} [cpu] force_latency=${max_power_state} no_turbo=true [bootloader] cmdline_cpu_part=+nohz=on${cmd_isolcpus} nohz_full=${isolated_cores} rcu_nocbs=${isolated_cores} tuned.non_isolcpus=${not_isolated_cpumask} intel_pstate=passive nosoftlockup 07070100000074000081ED0000000000000000000000016391BC3A000001DE000000000000000000000000000000000000004B00000000tuned-2.19.0.29+git.b894a3e/profiles/cpu-partitioning/00-tuned-pre-udev.sh#!/bin/sh type getargs >/dev/null 2>&1 || . /lib/dracut-lib.sh cpumask="$(getargs tuned.non_isolcpus)" files=$(echo /sys/devices/virtual/workqueue{/,/*/}cpumask) log() { echo "tuned: $@" >> /dev/kmsg } if [ -n "$cpumask" ]; then log "setting workqueues CPU mask to $cpumask" for f in $files; do if [ -f $f ]; then if ! echo $cpumask > $f 2>/dev/null; then log "ERROR: could not write workqueue CPU mask '$cpumask' to '$f'" fi fi done fi 07070100000075000081A40000000000000000000000016391BC3A000000FF000000000000000000000000000000000000005600000000tuned-2.19.0.29+git.b894a3e/profiles/cpu-partitioning/cpu-partitioning-variables.conf# Examples: # isolated_cores=2,4-7 # isolated_cores=2-23 # # Reserve 1 core per socket for housekeeping, isolate the rest. isolated_cores=${f:calc_isolated_cores:1} # To disable the kernel load balancing in certain isolated CPUs: # no_balance_cores=5-10 07070100000076000081ED0000000000000000000000016391BC3A000001F7000000000000000000000000000000000000004000000000tuned-2.19.0.29+git.b894a3e/profiles/cpu-partitioning/script.sh#!/bin/sh . 
/usr/lib/tuned/functions start() { mkdir -p "${TUNED_tmpdir}/etc/systemd" mkdir -p "${TUNED_tmpdir}/usr/lib/dracut/hooks/pre-udev" cp /etc/systemd/system.conf "${TUNED_tmpdir}/etc/systemd/" cp 00-tuned-pre-udev.sh "${TUNED_tmpdir}/usr/lib/dracut/hooks/pre-udev/" setup_kvm_mod_low_latency disable_ksm return "$?" } stop() { if [ "$1" = "full_rollback" ] then teardown_kvm_mod_low_latency enable_ksm fi return "$?" } process $@ 07070100000077000081A40000000000000000000000016391BC3A00000ACF000000000000000000000000000000000000004100000000tuned-2.19.0.29+git.b894a3e/profiles/cpu-partitioning/tuned.conf# tuned configuration # [main] summary=Optimize for CPU partitioning include=network-latency [variables] # User is responsible for updating variables.conf with variable content such as isolated_cores=X-Y include=/etc/tuned/cpu-partitioning-variables.conf isolated_cores_assert_check = \\${isolated_cores} # Make sure isolated_cores is defined before any of the variables that # use it (such as assert1) are defined, so that child profiles can set # isolated_cores directly in the profile (tuned.conf) isolated_cores = ${isolated_cores} # Fail if isolated_cores are not set assert1=${f:assertion_non_equal:isolated_cores are set:${isolated_cores}:${isolated_cores_assert_check}} # tmpdir tmpdir=${f:strip:${f:exec:mktemp:-d}} # Non-isolated cores cpumask including offline cores isolated_cores_expanded=${f:cpulist_unpack:${isolated_cores}} isolated_cpumask=${f:cpulist2hex:${isolated_cores_expanded}} not_isolated_cores_expanded=${f:cpulist_invert:${isolated_cores_expanded}} isolated_cores_online_expanded=${f:cpulist_online:${isolated_cores}} not_isolated_cores_online_expanded=${f:cpulist_online:${not_isolated_cores_expanded}} not_isolated_cpumask=${f:cpulist2hex:${not_isolated_cores_expanded}} # Make sure no_balance_cores is defined before # no_balance_cores_expanded is defined, so that child profiles can set # no_balance_cores directly in the profile (tuned.conf) 
no_balance_cores=${no_balance_cores} no_balance_cores_expanded=${f:cpulist_unpack:${no_balance_cores}} # Fail if isolated_cores contains CPUs which are not online assert2=${f:assertion:isolated_cores contains online CPU(s):${isolated_cores_expanded}:${isolated_cores_online_expanded}} cmd_isolcpus=${f:regex_search_ternary:${no_balance_cores}:\s*[0-9]: isolcpus=${no_balance_cores}:} [sysctl] kernel.hung_task_timeout_secs = 600 kernel.nmi_watchdog = 0 vm.stat_interval = 10 kernel.timer_migration = 0 [sysfs] /sys/bus/workqueue/devices/writeback/cpumask = ${not_isolated_cpumask} /sys/devices/virtual/workqueue/cpumask = ${not_isolated_cpumask} /sys/devices/virtual/workqueue/*/cpumask = ${not_isolated_cpumask} /sys/devices/system/machinecheck/machinecheck*/ignore_ce = 1 [systemd] cpu_affinity=${not_isolated_cores_expanded} [irqbalance] banned_cpus=${isolated_cores} [script] priority=5 script=${i:PROFILE_DIR}/script.sh [scheduler] isolated_cores=${isolated_cores} ps_blacklist=.*pmd.*;.*PMD.*;^DPDK;.*qemu-kvm.*;^contrail-vroute$;^lcore-slave-.*;^rte_mp_handle$;^rte_mp_async$;^eal-intr-thread$ [bootloader] priority=10 initrd_remove_dir=True initrd_dst_img=tuned-initrd.img initrd_add_dir=${tmpdir} cmdline_cpu_part=+nohz=on${cmd_isolcpus} nohz_full=${isolated_cores} rcu_nocbs=${isolated_cores} tuned.non_isolcpus=${not_isolated_cpumask} intel_pstate=disable nosoftlockup 07070100000078000041ED0000000000000000000000026391BC3A00000000000000000000000000000000000000000000002D00000000tuned-2.19.0.29+git.b894a3e/profiles/default07070100000079000081A40000000000000000000000016391BC3A000000A5000000000000000000000000000000000000003800000000tuned-2.19.0.29+git.b894a3e/profiles/default/tuned.conf# # tuned configuration # [main] summary=Legacy default tuned profile [cpu] [disk] # Comma separated list of devices, all devices if commented out. 
# devices=sda 0707010000007A000041ED0000000000000000000000026391BC3A00000000000000000000000000000000000000000000002D00000000tuned-2.19.0.29+git.b894a3e/profiles/desktop0707010000007B000041ED0000000000000000000000026391BC3A00000000000000000000000000000000000000000000003700000000tuned-2.19.0.29+git.b894a3e/profiles/desktop-powersave0707010000007C000081A40000000000000000000000016391BC3A000000F9000000000000000000000000000000000000004200000000tuned-2.19.0.29+git.b894a3e/profiles/desktop-powersave/tuned.conf# # tuned configuration # [main] summary=Optimize for the desktop use-case with power saving include=server-powersave [video] radeon_powersave=dpm-battery, auto [net] # Comma separated list of devices, all devices if commented out. # devices=eth0 0707010000007D000081A40000000000000000000000016391BC3A00000088000000000000000000000000000000000000003800000000tuned-2.19.0.29+git.b894a3e/profiles/desktop/tuned.conf# # tuned configuration # [main] summary=Optimize for the desktop use-case include=balanced [sysctl] kernel.sched_autogroup_enabled=1 0707010000007E000041ED0000000000000000000000026391BC3A00000000000000000000000000000000000000000000003800000000tuned-2.19.0.29+git.b894a3e/profiles/enterprise-storage0707010000007F000081A40000000000000000000000016391BC3A00000098000000000000000000000000000000000000004300000000tuned-2.19.0.29+git.b894a3e/profiles/enterprise-storage/tuned.conf# # tuned configuration # [main] summary=Legacy profile for RHEL6; for RHEL7, please use the throughput-performance profile include=throughput-performance 07070100000080000081A40000000000000000000000016391BC3A00003C12000000000000000000000000000000000000002F00000000tuned-2.19.0.29+git.b894a3e/profiles/functions# # This is a library of helper functions that can be used in scripts in tuned profiles.
# # API provided by this library is under heavy development and could be changed anytime # # # Config # STORAGE=/run/tuned STORAGE_PERSISTENT=/var/lib/tuned STORAGE_SUFFIX=".save" # # Helpers # # Save value # $0 STORAGE_NAME VALUE save_value() { [ "$#" -ne 2 ] && return [ "$2" -a -e "${STORAGE}" ] && echo "$2" > "${STORAGE}/${1}${STORAGE_SUFFIX}" } # Parse sysfs value, i.e. for "val1 [val2] val3" return "val2" # $0 SYSFS_NAME parse_sys() { local V1 V2 [ -r "$1" ] || return V1=`cat "$1"` V2="${V1##*[}" V2="${V2%%]*}" echo "${V2:-$V1}" } # Save sysfs value # $0 STORAGE_NAME SYSFS_NAME save_sys() { [ "$#" -ne 2 ] && return [ -r "$2" -a ! -e "${STORAGE}/${1}${STORAGE_SUFFIX}" ] && parse_sys "$2" > "${STORAGE}/${1}${STORAGE_SUFFIX}" } # Set sysfs value # $0 SYSFS_NAME VALUE set_sys() { [ "$#" -ne 2 ] && return [ -w "$1" ] && echo "$2" > "$1" } # Save and set sysfs value # $0 STORAGE_NAME SYSFS_NAME VALUE save_set_sys() { [ "$#" -ne 3 ] && return save_sys "$1" "$2" set_sys "$2" "$3" } # Get stored sysfs value from storage # $0 STORAGE_NAME get_stored_sys() { [ "$#" -ne 1 ] && return [ -r "${STORAGE}/${1}${STORAGE_SUFFIX}" ] && cat "${STORAGE}/${1}${STORAGE_SUFFIX}" } # Restore value from storage # $0 STORAGE_NAME restore_value() { [ "$#" -ne 1 ] && return _rs_value="`get_stored_sys \"$1\"`" unlink "${STORAGE}/${1}${STORAGE_SUFFIX}" >/dev/null 2>&1 [ "$_rs_value" ] && echo "$_rs_value" } # Restore sysfs value from storage, if nothing is stored, use VALUE # $0 STORAGE_NAME SYSFS_NAME [VALUE] restore_sys() { [ "$#" -lt 2 -o "$#" -gt 3 ] && return _rs_value="`get_stored_sys \"$1\"`" unlink "${STORAGE}/${1}${STORAGE_SUFFIX}" >/dev/null 2>&1 [ "$_rs_value" ] || _rs_value="$3" [ "$_rs_value" ] && set_sys "$2" "$_rs_value" } # # DISK tuning # DISKS_DEV="$(command ls -d1 /dev/[shv]d*[a-z] 2>/dev/null)" DISKS_SYS="$(command ls -d1 /sys/block/{sd,cciss,dm-,vd,dasd,xvd}* 2>/dev/null)" _check_elevator_override() { /bin/fgrep -q 'elevator=' /proc/cmdline } # $0 OPERATOR DEVICES 
ELEVATOR _set_elevator_helper() { _check_elevator_override && return SYS_BLOCK_SDX="" [ "$2" ] && SYS_BLOCK_SDX=$(eval LANG=C /bin/ls -1 "${2}" 2>/dev/null) # if there is no kernel command line elevator settings, apply the elevator if [ "$1" -a "$SYS_BLOCK_SDX" ]; then for i in $SYS_BLOCK_SDX; do se_dev="`echo \"$i\" | sed 's|/sys/block/\([^/]\+\)/queue/scheduler|\1|'`" $1 "elevator_${se_dev}" "$i" "$3" done fi } # $0 DEVICES ELEVATOR set_elevator() { _set_elevator_helper save_set_sys "$1" "$2" } # $0 DEVICES [ELEVATOR] restore_elevator() { re_elevator="$2" [ "$re_elevator" ] || re_elevator=cfq _set_elevator_helper restore_sys "$1" "$re_elevator" } # SATA Aggressive Link Power Management # usage: set_disk_alpm policy set_disk_alpm() { policy=$1 for host in /sys/class/scsi_host/*; do if [ -f $host/ahci_port_cmd ]; then port_cmd=`cat $host/ahci_port_cmd`; if [ $((0x$port_cmd & 0x240000)) = 0 -a -f $host/link_power_management_policy ]; then echo $policy >$host/link_power_management_policy; else echo "max_performance" >$host/link_power_management_policy; fi fi done } # usage: set_disk_apm level set_disk_apm() { level=$1 for disk in $DISKS_DEV; do hdparm -B $level $disk &>/dev/null done } # usage: set_disk_spindown level set_disk_spindown() { level=$1 for disk in $DISKS_DEV; do hdparm -S $level $disk &>/dev/null done } # usage: multiply_disk_readahead by multiply_disk_readahead() { by=$1 # float multiplication not supported in bash # bc might not be installed, python is available for sure for disk in $DISKS_SYS; do control="${disk}/queue/read_ahead_kb" old=$(cat $control) new=$(echo "print int($old*$by)" | python) (echo $new > $control) &>/dev/null done } # usage: remount_disk options partition1 partition2 ... 
remount_partitions() { options=$1 shift for partition in $@; do mount -o remount,$options $partition >/dev/null 2>&1 done } remount_all_no_rootboot_partitions() { [ "$1" ] || return # Find non-root and non-boot partitions, disable barriers on them rootvol=$(df -h / | grep "^/dev" | awk '{print $1}') bootvol=$(df -h /boot | grep "^/dev" | awk '{print $1}') volumes=$(df -hl --exclude=tmpfs | grep "^/dev" | awk '{print $1}') nobarriervols=$(echo "$volumes" | grep -v $rootvol | grep -v $bootvol) remount_partitions "$1" $nobarriervols } DISK_QUANTUM_SAVE="${STORAGE}/disk_quantum${STORAGE_SUFFIX}" set_disk_scheduler_quantum() { value=$1 rm -f "$DISK_QUANTUM_SAVE" for disk in $DISKS_SYS; do control="${disk}/queue/iosched/quantum" echo "echo $(cat $control) > $control" >> "$DISK_QUANTUM_SAVE" 2>/dev/null (echo $value > $control) &>/dev/null done } restore_disk_scheduler_quantum() { if [ -r "$DISK_QUANTUM_SAVE" ]; then /bin/sh "$DISK_QUANTUM_SAVE" &>/dev/null rm -f "$DISK_QUANTUM_SAVE" fi } # # CPU tuning # CPUSPEED_SAVE_FILE="${STORAGE}/cpuspeed${STORAGE_SUFFIX}" CPUSPEED_ORIG_GOV="${STORAGE}/cpuspeed-governor-%s${STORAGE_SUFFIX}" CPUSPEED_STARTED="${STORAGE}/cpuspeed-started" CPUSPEED_CFG="/etc/sysconfig/cpuspeed" CPUSPEED_INIT="/etc/rc.d/init.d/cpuspeed" # do not use cpuspeed CPUSPEED_USE="0" CPUS="$(ls -d1 /sys/devices/system/cpu/cpu* | sed 's;^.*/;;' | grep "cpu[0-9]\+")" # set CPU governor setting and store the old settings # usage: set_cpu_governor governor set_cpu_governor() { governor=$1 # always patch the cpuspeed configuration if it exists; if it doesn't exist and is enabled, # explicitly disable it with a hint if [ -e $CPUSPEED_INIT ]; then if [ ! -e $CPUSPEED_SAVE_FILE -a -e $CPUSPEED_CFG ]; then cp -p $CPUSPEED_CFG $CPUSPEED_SAVE_FILE sed -e 's/^GOVERNOR=.*/GOVERNOR='$governor'/g' $CPUSPEED_SAVE_FILE > $CPUSPEED_CFG fi else if [ "$CPUSPEED_USE" = "1" ]; then echo >&2 echo "Suggestion: install 'cpuspeed' package to get best tuning results."
>&2 echo "Falling back to sysfs control." >&2 echo >&2 fi CPUSPEED_USE="0" fi if [ "$CPUSPEED_USE" = "1" ]; then service cpuspeed status &> /dev/null [ $? -eq 3 ] && touch $CPUSPEED_STARTED || rm -f $CPUSPEED_STARTED service cpuspeed restart &> /dev/null # direct change using sysfs elif [ -e /sys/devices/system/cpu/cpu0/cpufreq/scaling_governor ]; then for cpu in $CPUS; do gov_file=/sys/devices/system/cpu/$cpu/cpufreq/scaling_governor save_file=$(printf $CPUSPEED_ORIG_GOV $cpu) rm -f $save_file if [ -e $gov_file ]; then cat $gov_file > $save_file echo $governor > $gov_file fi done fi } # re-enable previous CPU governor settings # usage: restore_cpu_governor restore_cpu_governor() { if [ -e $CPUSPEED_INIT ]; then if [ -e $CPUSPEED_SAVE_FILE ]; then cp -fp $CPUSPEED_SAVE_FILE $CPUSPEED_CFG rm -f $CPUSPEED_SAVE_FILE fi if [ "$CPUSPEED_USE" = "1" ]; then if [ -e $CPUSPEED_STARTED ]; then service cpuspeed stop &> /dev/null else service cpuspeed restart &> /dev/null fi fi if [ -e $CPUSPEED_STARTED ]; then rm -f $CPUSPEED_STARTED fi else CPUSPEED_USE="0" fi if [ "$CPUSPEED_USE" != "1" -a -e /sys/devices/system/cpu/cpu0/cpufreq/scaling_governor ]; then for cpu in $CPUS; do cpufreq_dir=/sys/devices/system/cpu/$cpu/cpufreq save_file=$(printf $CPUSPEED_ORIG_GOV $cpu) if [ -e $cpufreq_dir/scaling_governor ]; then if [ -e $save_file ]; then cat $save_file > $cpufreq_dir/scaling_governor rm -f $save_file else echo userspace > $cpufreq_dir/scaling_governor cat $cpufreq_dir/cpuinfo_max_freq > $cpufreq_dir/scaling_setspeed fi fi done fi } _cpu_multicore_powersave() { value=$1 [ -e /sys/devices/system/cpu/sched_mc_power_savings ] && echo $value > /sys/devices/system/cpu/sched_mc_power_savings } # enable multi core power savings for low wakeup systems enable_cpu_multicore_powersave() { _cpu_multicore_powersave 1 } disable_cpu_multicore_powersave() { _cpu_multicore_powersave 0 } # # MEMORY tuning # THP_ENABLE="/sys/kernel/mm/transparent_hugepage/enabled" 
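A minimal, self-contained sketch of the bracket-extraction pattern used by parse_sys() above and by the transparent hugepage helpers below, where sysfs exposes the active value as e.g. "always [madvise] never". The sample string is hypothetical; no real sysfs file is touched:

```shell
# Illustration only: extract the bracketed (active) entry from a
# sysfs-style value list, the same pattern as parse_sys() and
# enable_transparent_hugepages(). Uses a literal sample string.
sample="always [madvise] never"
current=$(echo "$sample" | cut -f2 -d'[' | cut -f1 -d']')
echo "$current"   # prints "madvise"
```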
THP_SAVE="${STORAGE}/thp${STORAGE_SUFFIX}" [ -e "$THP_ENABLE" ] || THP_ENABLE="/sys/kernel/mm/redhat_transparent_hugepage/enabled" enable_transparent_hugepages() { if [ -e $THP_ENABLE ]; then cut -f2 -d'[' $THP_ENABLE | cut -f1 -d']' > $THP_SAVE (echo always > $THP_ENABLE) &> /dev/null fi } restore_transparent_hugepages() { if [ -e $THP_SAVE ]; then (echo $(cat $THP_SAVE) > $THP_ENABLE) &> /dev/null rm -f $THP_SAVE fi } # # WIFI tuning # # usage: _wifi_set_power_level level _wifi_set_power_level() { # 0 auto, PM enabled # 1-5 least savings and lowest latency - most savings and highest latency # 6 disable power savings level=$1 # apply the settings using iwpriv ifaces=$(cat /proc/net/wireless | grep -v '|' | sed 's@^ *\([^:]*\):.*@\1@') for iface in $ifaces; do iwpriv $iface set_power $level done # some adapters may rely on sysfs for i in /sys/bus/pci/devices/*/power_level; do (echo $level > $i) &> /dev/null done } enable_wifi_powersave() { _wifi_set_power_level 5 } disable_wifi_powersave() { _wifi_set_power_level 0 } # # BLUETOOTH tuning # disable_bluetooth() { hciconfig hci0 down >/dev/null 2>&1 lsmod | grep -q btusb && rmmod btusb } enable_bluetooth() { modprobe btusb hciconfig hci0 up >/dev/null 2>&1 } # # USB tuning # _usb_autosuspend() { value=$1 for i in /sys/bus/usb/devices/*/power/autosuspend; do echo $value > $i; done &> /dev/null } enable_usb_autosuspend() { _usb_autosuspend 1 } disable_usb_autosuspend() { _usb_autosuspend 0 } # # SOUND CARDS tuning # enable_snd_ac97_powersave() { save_set_sys ac97 /sys/module/snd_ac97_codec/parameters/power_save Y } disable_snd_ac97_powersave() { save_set_sys ac97 /sys/module/snd_ac97_codec/parameters/power_save N } restore_snd_ac97_powersave() { restore_sys ac97 /sys/module/snd_ac97_codec/parameters/power_save $1 } set_hda_intel_powersave() { save_set_sys hda_intel /sys/module/snd_hda_intel/parameters/power_save $1 } restore_hda_intel_powersave() { restore_sys hda_intel /sys/module/snd_hda_intel/parameters/power_save
$1 } # # VIDEO CARDS tuning # # Power savings settings for Radeon # usage: set_radeon_powersave dynpm | default | auto | low | mid | high set_radeon_powersave () { [ "$1" ] || return [ -e /sys/class/drm/card0/device/power_method ] || return if [ "$1" = default -o "$1" = auto -o "$1" = low -o "$1" = mid -o "$1" = high ]; then [ -w /sys/class/drm/card0/device/power_profile ] || return save_sys radeon_profile /sys/class/drm/card0/device/power_profile save_set_sys radeon_method /sys/class/drm/card0/device/power_method profile set_sys /sys/class/drm/card0/device/power_profile "$1" elif [ "$1" = dynpm ]; then save_sys radeon_profile /sys/class/drm/card0/device/power_profile save_set_sys radeon_method /sys/class/drm/card0/device/power_method dynpm fi } restore_radeon_powersave () { restore_sys radeon_method /sys/class/drm/card0/device/power_method profile _rrp_method="`get_stored_sys radeon_method`" [ -z "$_rrp_method" -o "$_rrp_method" = "profile" ] && restore_sys radeon_profile /sys/class/drm/card0/device/power_profile default } # # SOFTWARE tuning # RSYSLOG_CFG="/etc/rsyslog.conf" RSYSLOG_SAVE="${STORAGE}/rsyslog${STORAGE_SUFFIX}" disable_logs_syncing() { cp -p $RSYSLOG_CFG $RSYSLOG_SAVE sed -i 's/ \/var\/log/-\/var\/log/' $RSYSLOG_CFG } restore_logs_syncing() { mv -Z $RSYSLOG_SAVE $RSYSLOG_CFG || mv $RSYSLOG_SAVE $RSYSLOG_CFG } irqbalance_banned_cpus_clear() { sed -i '/^IRQBALANCE_BANNED_CPUS=/d' /etc/sysconfig/irqbalance || return if [ ${1:-restart} = restart ]; then systemctl try-restart irqbalance fi } irqbalance_banned_cpus_setup() { irqbalance_banned_cpus_clear norestart if [ -n "$1" ]; then echo "IRQBALANCE_BANNED_CPUS=$1" >> /etc/sysconfig/irqbalance fi systemctl try-restart irqbalance } # # HARDWARE SPECIFIC tuning # # Asus EEE with Intel Atom _eee_fsb_control() { value=$1 if [ -e /sys/devices/platform/eeepc/she ]; then echo $value > /sys/devices/platform/eeepc/she elif [ -e /sys/devices/platform/eeepc/cpufv ]; then echo $value > /sys/devices/platform/eeepc/cpufv elif [
-e /sys/devices/platform/eeepc-wmi/cpufv ]; then echo $value > /sys/devices/platform/eeepc-wmi/cpufv fi } eee_set_reduced_fsb() { _eee_fsb_control 2 } eee_set_normal_fsb() { _eee_fsb_control 1 } # # modprobe configuration handling # kvm_modprobe_file=/etc/modprobe.d/kvm.rt.tuned.conf teardown_kvm_mod_low_latency() { rm -f $kvm_modprobe_file } setup_kvm_mod_low_latency() { local HAS_KPS="" local HAS_NX_HP="" local HAS_PLE_GAP="" local WANTS_KPS="" local WANTS_NX_HP="" local WANTS_PLE_GAP="" modinfo -p kvm | grep -q kvmclock_periodic_sync && HAS_KPS=1 modinfo -p kvm | grep -q nx_huge_pages && HAS_NX_HP=1 modinfo -p kvm_intel | grep -q ple_gap && HAS_PLE_GAP=1 grep -qs kvmclock_periodic_sync "$kvm_modprobe_file" && WANTS_KPS=1 grep -qs nx_huge_pages "$kvm_modprobe_file" && WANTS_NX_HP=1 grep -qs ple_gap "$kvm_modprobe_file" && WANTS_PLE_GAP=1 if [ "$HAS_KPS" != "$WANTS_KPS" -o "$HAS_PLE_GAP" != "$WANTS_PLE_GAP" -o \ "$HAS_NX_HP" != "$WANTS_NX_HP" ]; then teardown_kvm_mod_low_latency [ "$HAS_KPS" ] && echo "options kvm kvmclock_periodic_sync=0" > $kvm_modprobe_file [ "$HAS_NX_HP" ] && echo "options kvm nx_huge_pages=0" >> $kvm_modprobe_file [ "$HAS_PLE_GAP" ] && echo "options kvm_intel ple_gap=0" >> $kvm_modprobe_file fi return 0 } # # KSM # KSM_SERVICES="ksm ksmtuned" KSM_RUN_PATH=/sys/kernel/mm/ksm/run KSM_MASK_FILE="${STORAGE_PERSISTENT}/ksm-masked" disable_ksm() { if [ ! -f $KSM_MASK_FILE ]; then # Always create $KSM_MASK_FILE, since we don't want to # run any systemctl commands during boot if ! 
touch $KSM_MASK_FILE; then die "failed to create $KSM_MASK_FILE" fi # Do not run any systemctl commands if $KSM_SERVICES units do not exist systemctl cat -- $KSM_SERVICES &> /dev/null || return systemctl --now --quiet mask $KSM_SERVICES # Unmerge all shared pages test -f $KSM_RUN_PATH && echo 2 > $KSM_RUN_PATH fi } # Should only be called when full_rollback == true enable_ksm() { if [ -f $KSM_MASK_FILE ]; then # Do not run any systemctl commands if $KSM_SERVICES units do not exist systemctl cat -- $KSM_SERVICES &> /dev/null || return if systemctl --quiet unmask $KSM_SERVICES; then rm -f $KSM_MASK_FILE fi fi } die() { echo "$@" >&2 exit 1 } # # ACTION PROCESSING # error_not_implemented() { echo "tuned: script function '$1' is not implemented." >&2 } # implicit actions, will be used if not provided by profile script: # # * start must be implemented # * stop must be implemented start() { error_not_implemented start return 16 } stop() { error_not_implemented stop return 16 } # # main processing # process() { ARG="$1" shift case "$ARG" in start) start "$@" RETVAL=$? ;; stop) stop "$@" RETVAL=$? ;; verify) if declare -f verify &> /dev/null; then verify "$@" else : fi RETVAL=$? ;; *) echo $"Usage: $0 {start|stop|verify}" RETVAL=2 ;; esac exit $RETVAL } 07070100000081000041ED0000000000000000000000026391BC3A00000000000000000000000000000000000000000000003100000000tuned-2.19.0.29+git.b894a3e/profiles/hpc-compute07070100000082000081A40000000000000000000000016391BC3A00000570000000000000000000000000000000000000003C00000000tuned-2.19.0.29+git.b894a3e/profiles/hpc-compute/tuned.conf# # tuned configuration # [main] summary=Optimize for HPC compute workloads description=Configures virtual memory, CPU governors, and network settings for HPC compute workloads. include=latency-performance [vm] # Most HPC applications can take advantage of hugepages. Force them on. transparent_hugepages=always [disk] # Increase the readahead value to support large, contiguous files.
readahead=>4096 [sysctl] # Keep a reasonable amount of memory free to support large mem requests vm.min_free_kbytes=135168 # Most HPC applications are NUMA aware. Enabling zone reclaim ensures # memory is reclaimed and reallocated from local pages. Disabling # automatic NUMA balancing prevents unwanted memory unmapping. vm.zone_reclaim_mode=1 kernel.numa_balancing=0 # Busy polling helps reduce latency in the network receive path # by allowing socket layer code to poll the receive queue of a # network device, and disabling network interrupts. # busy_read value greater than 0 enables busy polling. Recommended # net.core.busy_read value is 50. # busy_poll value greater than 0 enables polling globally. # Recommended net.core.busy_poll value is 50 net.core.busy_read=50 net.core.busy_poll=50 # TCP fast open reduces network latency by enabling data exchange # during the sender's initial TCP SYN. The value 3 enables fast open # on client and server connections. net.ipv4.tcp_fastopen=3 07070100000083000041ED0000000000000000000000026391BC3A00000000000000000000000000000000000000000000002F00000000tuned-2.19.0.29+git.b894a3e/profiles/intel-sst07070100000084000081A40000000000000000000000016391BC3A00000075000000000000000000000000000000000000003A00000000tuned-2.19.0.29+git.b894a3e/profiles/intel-sst/tuned.conf[main] summary=Configure for Intel Speed Select Base Frequency [bootloader] cmdline_intel_sst=-intel_pstate=disable 07070100000085000041ED0000000000000000000000026391BC3A00000000000000000000000000000000000000000000003900000000tuned-2.19.0.29+git.b894a3e/profiles/laptop-ac-powersave07070100000086000081ED0000000000000000000000016391BC3A0000009A000000000000000000000000000000000000004300000000tuned-2.19.0.29+git.b894a3e/profiles/laptop-ac-powersave/script.sh#!/bin/sh . 
/usr/lib/tuned/functions start() { enable_wifi_powersave return 0 } stop() { disable_wifi_powersave return 0 } process $@ 07070100000087000081A40000000000000000000000016391BC3A00000097000000000000000000000000000000000000004400000000tuned-2.19.0.29+git.b894a3e/profiles/laptop-ac-powersave/tuned.conf# # tuned configuration # [main] summary=Optimize for laptop with power savings include=desktop-powersave [script] script=${i:PROFILE_DIR}/script.sh 07070100000088000041ED0000000000000000000000026391BC3A00000000000000000000000000000000000000000000003E00000000tuned-2.19.0.29+git.b894a3e/profiles/laptop-battery-powersave07070100000089000081A40000000000000000000000016391BC3A00000076000000000000000000000000000000000000004900000000tuned-2.19.0.29+git.b894a3e/profiles/laptop-battery-powersave/tuned.conf# # tuned configuration # [main] summary=Optimize laptop profile with more aggressive power saving include=powersave 0707010000008A000041ED0000000000000000000000026391BC3A00000000000000000000000000000000000000000000003900000000tuned-2.19.0.29+git.b894a3e/profiles/latency-performance0707010000008B000081A40000000000000000000000016391BC3A00000478000000000000000000000000000000000000004400000000tuned-2.19.0.29+git.b894a3e/profiles/latency-performance/tuned.conf# # tuned configuration # [main] summary=Optimize for deterministic performance at the cost of increased power consumption [cpu] force_latency=cstate.id_no_zero:1|3 governor=performance energy_perf_bias=performance min_perf_pct=100 [sysctl] # If a workload mostly uses anonymous memory and it hits this limit, the entire # working set is buffered for I/O, and any more write buffering would require # swapping, so it's time to throttle writes until I/O can catch up. Workloads # that mostly use file mappings may be able to use even higher values. 
# # The generator of dirty data starts writeback at this percentage (system default # is 20%) vm.dirty_ratio=10 # Start background writeback (via writeback threads) at this percentage (system # default is 10%) vm.dirty_background_ratio=3 # The swappiness parameter controls the tendency of the kernel to move # processes out of physical memory and onto the swap disk. # 0 tells the kernel to avoid swapping processes out of physical memory # for as long as possible # 100 tells the kernel to aggressively swap processes out of physical memory # and move them to swap cache vm.swappiness=10 0707010000008C000041ED0000000000000000000000026391BC3A00000000000000000000000000000000000000000000002B00000000tuned-2.19.0.29+git.b894a3e/profiles/mssql0707010000008D000081A40000000000000000000000016391BC3A0000029E000000000000000000000000000000000000003600000000tuned-2.19.0.29+git.b894a3e/profiles/mssql/tuned.conf# # tuned configuration # [main] summary=Optimize for Microsoft SQL Server include=throughput-performance [cpu] force_latency=5 [vm] # For multi-instance SQL deployments use 'madvise' instead of 'always' transparent_hugepages=always [sysctl] vm.swappiness=1 vm.dirty_background_ratio=3 vm.dirty_ratio=80 vm.dirty_expire_centisecs=500 vm.dirty_writeback_centisecs=100 vm.max_map_count=1600000 net.core.rmem_default=262144 net.core.rmem_max=4194304 net.core.wmem_default=262144 net.core.wmem_max=1048576 kernel.numa_balancing=0 [scheduler] sched_latency_ns=60000000 sched_migration_cost_ns=500000 sched_min_granularity_ns=15000000 sched_wakeup_granularity_ns=2000000 0707010000008E000041ED0000000000000000000000026391BC3A00000000000000000000000000000000000000000000003500000000tuned-2.19.0.29+git.b894a3e/profiles/network-latency0707010000008F000081A40000000000000000000000016391BC3A0000017E000000000000000000000000000000000000004000000000tuned-2.19.0.29+git.b894a3e/profiles/network-latency/tuned.conf# # tuned configuration # [main] summary=Optimize for deterministic performance at the cost of 
increased power consumption, focused on low latency network performance include=latency-performance [vm] transparent_hugepages=never [sysctl] net.core.busy_read=50 net.core.busy_poll=50 net.ipv4.tcp_fastopen=3 kernel.numa_balancing=0 [bootloader] cmdline_network_latency=skew_tick=1 07070100000090000041ED0000000000000000000000026391BC3A00000000000000000000000000000000000000000000003800000000tuned-2.19.0.29+git.b894a3e/profiles/network-throughput07070100000091000081A40000000000000000000000016391BC3A000001F2000000000000000000000000000000000000004300000000tuned-2.19.0.29+git.b894a3e/profiles/network-throughput/tuned.conf# # tuned configuration # [main] summary=Optimize for streaming network throughput, generally only necessary on older CPUs or 40G+ networks include=throughput-performance [sysctl] # Increase kernel buffer size maximums. Currently this seems only necessary at 40Gb speeds. # # The buffer tuning values below do not account for any potential hugepage allocation. # Ensure that you do not oversubscribe system memory. 
net.ipv4.tcp_rmem="4096 87380 16777216" net.ipv4.tcp_wmem="4096 16384 16777216" 07070100000092000041ED0000000000000000000000026391BC3A00000000000000000000000000000000000000000000002F00000000tuned-2.19.0.29+git.b894a3e/profiles/openshift07070100000093000041ED0000000000000000000000026391BC3A00000000000000000000000000000000000000000000003D00000000tuned-2.19.0.29+git.b894a3e/profiles/openshift-control-plane07070100000094000081A40000000000000000000000016391BC3A0000006D000000000000000000000000000000000000004800000000tuned-2.19.0.29+git.b894a3e/profiles/openshift-control-plane/tuned.conf# # tuned configuration # [main] summary=Optimize systems running OpenShift control plane include=openshift 07070100000095000041ED0000000000000000000000026391BC3A00000000000000000000000000000000000000000000003400000000tuned-2.19.0.29+git.b894a3e/profiles/openshift-node07070100000096000081A40000000000000000000000016391BC3A000000CC000000000000000000000000000000000000003F00000000tuned-2.19.0.29+git.b894a3e/profiles/openshift-node/tuned.conf# # tuned configuration # [main] summary=Optimize systems running OpenShift nodes include=openshift [sysctl] net.ipv4.tcp_fastopen=3 fs.inotify.max_user_watches=65536 fs.inotify.max_user_instances=8192 07070100000097000081A40000000000000000000000016391BC3A00000397000000000000000000000000000000000000003A00000000tuned-2.19.0.29+git.b894a3e/profiles/openshift/tuned.conf# # tuned configuration # [main] summary=Optimize systems running OpenShift (parent profile) include=${f:virt_check:virtual-guest:throughput-performance} [selinux] avc_cache_threshold=8192 [net] nf_conntrack_hashsize=1048576 [sysctl] net.ipv4.ip_forward=1 kernel.pid_max=>4194304 fs.aio-max-nr=>1048576 net.netfilter.nf_conntrack_max=1048576 net.ipv4.conf.all.arp_announce=2 net.ipv4.neigh.default.gc_thresh1=8192 net.ipv4.neigh.default.gc_thresh2=32768 net.ipv4.neigh.default.gc_thresh3=65536 net.ipv6.neigh.default.gc_thresh1=8192 net.ipv6.neigh.default.gc_thresh2=32768 
net.ipv6.neigh.default.gc_thresh3=65536 vm.max_map_count=262144 [sysfs] /sys/module/nvme_core/parameters/io_timeout=4294967295 /sys/module/nvme_core/parameters/max_retries=10 [scheduler] # see rhbz#1979352; exclude containers from aligning to house keeping CPUs cgroup_ps_blacklist=/kubepods\.slice/ # workaround for rhbz#1921738 runtime=0 07070100000098000041ED0000000000000000000000026391BC3A00000000000000000000000000000000000000000000003D00000000tuned-2.19.0.29+git.b894a3e/profiles/optimize-serial-console07070100000099000081A40000000000000000000000016391BC3A000000EF000000000000000000000000000000000000004800000000tuned-2.19.0.29+git.b894a3e/profiles/optimize-serial-console/tuned.conf# # tuned configuration # # This tuned configuration optimizes for serial console performance at the # expense of reduced debug information to the console. [main] summary=Optimize for serial console use. [sysctl] kernel.printk="4 4 1 7" 0707010000009A000041ED0000000000000000000000026391BC3A00000000000000000000000000000000000000000000002C00000000tuned-2.19.0.29+git.b894a3e/profiles/oracle0707010000009B000081A40000000000000000000000016391BC3A00000285000000000000000000000000000000000000003700000000tuned-2.19.0.29+git.b894a3e/profiles/oracle/tuned.conf# # tuned configuration # [main] summary=Optimize for Oracle RDBMS include=throughput-performance [sysctl] vm.swappiness = 10 vm.dirty_background_ratio = 3 vm.dirty_ratio = 40 vm.dirty_expire_centisecs = 500 vm.dirty_writeback_centisecs = 100 kernel.shmmax = 4398046511104 kernel.shmall = 1073741824 kernel.shmmni = 4096 kernel.sem = 250 32000 100 128 fs.file-max = 6815744 fs.aio-max-nr = 1048576 net.ipv4.ip_local_port_range = 9000 65499 net.core.rmem_default = 262144 net.core.rmem_max = 4194304 net.core.wmem_default = 262144 net.core.wmem_max = 1048576 kernel.panic_on_oops = 1 kernel.numa_balancing = 0 [vm] transparent_hugepages=never 
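Several profiles above use a `>` prefix on a value (e.g. `kernel.pid_max=>4194304` in the openshift profile, `readahead=>4096` in hpc-compute), which requests a minimum: the setting is raised to the given value only if the current value is lower. A minimal sketch of that "at least" semantics in plain shell — this is an illustration of the behavior, not tuned's implementation, and the `at_least` helper is hypothetical:

```shell
#!/bin/sh
# Sketch of the ">" (at-least) value semantics used by some profiles:
# raise the current value to the requested minimum, never lower it.
# The at_least helper is illustrative, not part of tuned.
at_least() {
    current=$1 minimum=$2
    if [ "$current" -lt "$minimum" ]; then
        echo "$minimum"
    else
        echo "$current"
    fi
}

at_least 32768 4194304     # current below the minimum: raised
at_least 8388608 4194304   # current already above: kept as-is
```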
0707010000009C000041ED0000000000000000000000026391BC3A00000000000000000000000000000000000000000000003000000000tuned-2.19.0.29+git.b894a3e/profiles/postgresql0707010000009D000081A40000000000000000000000016391BC3A00000765000000000000000000000000000000000000003B00000000tuned-2.19.0.29+git.b894a3e/profiles/postgresql/tuned.conf# # tuned configuration for PostgreSQL servers # [main] summary=Optimize for PostgreSQL server include=throughput-performance [cpu] # The alternating CPU-bound and disk I/O phases of a PostgreSQL # server can cause the CPU to drop into powersave states. # # Explicitly disable deep c-states to reduce latency on OLTP workloads. force_latency=1 [vm] transparent_hugepages=never [sysctl] # The dirty_background_ratio and dirty_ratio settings control the percentage # of memory that the file system cache may fill with dirty data before the # kernel starts to flush it to disk. The default values are 10% and 20% # respectively. On systems with a large amount of memory these values can # amount to tens of gigabytes and produce I/O spikes when the PostgreSQL # server writes checkpoints. # # Keep these values reasonably small - about the size of the RAID controller # write-back cache (typically 512MB - 2GB). vm.dirty_background_ratio = 0 vm.dirty_ratio = 0 vm.dirty_background_bytes = 67108864 vm.dirty_bytes = 536870912 # The swappiness parameter controls the tendency of the kernel to move # processes out of physical memory and onto the swap disk.
# 0 tells the kernel to avoid swapping processes out of physical memory # for as long as possible # 100 tells the kernel to aggressively swap processes out of physical memory # and move them to swap cache vm.swappiness=3 # Disable the autogroup feature of the CFS scheduler # (system default is 1, i.e. enabled) kernel.sched_autogroup_enabled = 0 [scheduler] # ktune sysctl settings for rhel6 servers, maximizing i/o throughput # # Minimal preemption granularity for CPU-bound tasks: # (default: 1 msec * (1 + ilog(ncpus)), units: nanoseconds) sched_min_granularity_ns = 10000000 # The total time the scheduler will consider a migrated process # "cache hot" and thus less likely to be re-migrated # (system default is 500000, i.e. 0.5 ms) sched_migration_cost_ns = 50000000 0707010000009E000041ED0000000000000000000000026391BC3A00000000000000000000000000000000000000000000002F00000000tuned-2.19.0.29+git.b894a3e/profiles/powersave0707010000009F000081ED0000000000000000000000016391BC3A0000010D000000000000000000000000000000000000003900000000tuned-2.19.0.29+git.b894a3e/profiles/powersave/script.sh#!/bin/sh . /usr/lib/tuned/functions start() { [ "$USB_AUTOSUSPEND" = 1 ] && enable_usb_autosuspend enable_wifi_powersave return 0 } stop() { [ "$USB_AUTOSUSPEND" = 1 ] && disable_usb_autosuspend disable_wifi_powersave return 0 } process $@ 070701000000A0000081A40000000000000000000000016391BC3A0000022D000000000000000000000000000000000000003A00000000tuned-2.19.0.29+git.b894a3e/profiles/powersave/tuned.conf# # tuned configuration # [main] summary=Optimize for low power consumption [cpu] governor=ondemand|powersave energy_perf_bias=powersave|power [eeepc_she] [vm] [audio] timeout=10 [video] radeon_powersave=dpm-battery, auto [disk] # Comma separated list of devices, all devices if commented out. # devices=sda [net] # Comma separated list of devices, all devices if commented out.
# devices=eth0 [scsi_host] alpm=min_power [sysctl] vm.laptop_mode=5 vm.dirty_writeback_centisecs=1500 kernel.nmi_watchdog=0 [script] script=${i:PROFILE_DIR}/script.sh 070701000000A1000041ED0000000000000000000000026391BC3A00000000000000000000000000000000000000000000002E00000000tuned-2.19.0.29+git.b894a3e/profiles/realtime070701000000A2000041ED0000000000000000000000026391BC3A00000000000000000000000000000000000000000000003C00000000tuned-2.19.0.29+git.b894a3e/profiles/realtime-virtual-guest070701000000A3000081A40000000000000000000000016391BC3A0000032F000000000000000000000000000000000000006200000000tuned-2.19.0.29+git.b894a3e/profiles/realtime-virtual-guest/realtime-virtual-guest-variables.conf# # Variable settings below override the definitions from the # /etc/tuned/realtime-variables.conf file. # # Examples: # isolated_cores=2,4-7 # isolated_cores=2-23 # # Reserve 1 core per socket for housekeeping, isolate the rest. isolated_cores=${f:calc_isolated_cores:1} # # Uncomment the 'isolate_managed_irq=Y' below if you want to move kernel # managed IRQs out of isolated cores. Note that this requires kernel # support. Please only specify this parameter if you are sure that the # kernel supports it. # isolate_managed_irq=Y # # Set the desired combined queue count value using the parameter provided # below. Ideally this should be set to the number of housekeeping CPUs i.e., # in the example given below it is assumed that the system has 4 housekeeping # (non-isolated) CPUs. # # netdev_queue_count=4 070701000000A4000081ED0000000000000000000000016391BC3A00000080000000000000000000000000000000000000004600000000tuned-2.19.0.29+git.b894a3e/profiles/realtime-virtual-guest/script.sh#!/bin/sh .
/usr/lib/tuned/functions start() { return 0 } stop() { return 0 } verify() { return 0 } process $@ 070701000000A5000081A40000000000000000000000016391BC3A00000704000000000000000000000000000000000000004700000000tuned-2.19.0.29+git.b894a3e/profiles/realtime-virtual-guest/tuned.conf# # tuned configuration # [main] summary=Optimize for realtime workloads running within a KVM guest include=realtime [variables] # User is responsible for adding isolated_cores=X-Y to realtime-virtual-guest-variables.conf include=/etc/tuned/realtime-virtual-guest-variables.conf isolated_cores_assert_check = \\${isolated_cores} # Fail if isolated_cores are not set assert1=${f:assertion_non_equal:isolated_cores are set:${isolated_cores}:${isolated_cores_assert_check}} isolated_cores_expanded=${f:cpulist_unpack:${isolated_cores}} isolated_cores_online_expanded=${f:cpulist_online:${isolated_cores}} non_isolated_cores=${f:cpulist_invert:${isolated_cores}} # Fail if isolated_cores contains CPUs which are not online assert2=${f:assertion:isolated_cores contains online CPU(s):${isolated_cores_expanded}:${isolated_cores_online_expanded}} [scheduler] # group.group_name=rule_priority:scheduler_policy:scheduler_priority:core_affinity_in_hex:process_name_regex # for i in `pgrep ksoftirqd` ; do grep Cpus_allowed_list /proc/$i/status ; done group.ksoftirqd=0:f:2:*:^\[ksoftirqd group.ktimers=0:f:2:*:^\[ktimers # for i in `pgrep rcuc` ; do grep Cpus_allowed_list /proc/$i/status ; done group.rcuc=0:f:4:*:^\[rcuc # for i in `pgrep rcub` ; do grep Cpus_allowed_list /proc/$i/status ; done group.rcub=0:f:4:*:^\[rcub # for i in `pgrep ktimersoftd` ; do grep Cpus_allowed_list /proc/$i/status ; done group.ktimersoftd=0:f:3:*:^\[ktimersoftd ps_blacklist=^\[ksoftirqd;^\[ktimers;^\[rcuc;^\[rcub;^\[ktimersoftd [sysfs] # Perform lockless check for timer softirq on isolated CPUs. 
# /sys/kernel/ktimer_lockless_check = 1 [script] script=${i:PROFILE_DIR}/script.sh [bootloader] cmdline_rvg=+nohz=on nohz_full=${isolated_cores} rcu_nocbs=${isolated_cores} irqaffinity=${non_isolated_cores} 070701000000A6000041ED0000000000000000000000026391BC3A00000000000000000000000000000000000000000000003B00000000tuned-2.19.0.29+git.b894a3e/profiles/realtime-virtual-host070701000000A7000081A40000000000000000000000016391BC3A0000032D000000000000000000000000000000000000006000000000tuned-2.19.0.29+git.b894a3e/profiles/realtime-virtual-host/realtime-virtual-host-variables.conf# # Variable settings below override the definitions from the # /etc/tuned/realtime-variables.conf file. # # Examples: # isolated_cores=2,4-7 # isolated_cores=2-23 # # Reserve 1 core per socket for housekeeping, isolate the rest. isolated_cores=${f:calc_isolated_cores:1} # Uncomment the 'isolate_managed_irq=Y' below if you want to move kernel # managed IRQs out of isolated cores. Note that this requires kernel # support. Please only specify this parameter if you are sure that the # kernel supports it. # isolate_managed_irq=Y # # Set the desired combined queue count value using the parameter provided # below. Ideally this should be set to the number of housekeeping CPUs i.e., # in the example given below it is assumed that the system has 4 housekeeping # (non-isolated) CPUs. # # netdev_queue_count=4 070701000000A8000081ED0000000000000000000000016391BC3A0000058C000000000000000000000000000000000000004500000000tuned-2.19.0.29+git.b894a3e/profiles/realtime-virtual-host/script.sh#!/bin/sh . /usr/lib/tuned/functions start() { setup_kvm_mod_low_latency return 0 } stop() { if [ "$1" = "full_rollback" ]; then teardown_kvm_mod_low_latency fi return "$?"
} verify() { retval=0 # set via /etc/modprobe.d/kvm.conf and /etc/modprobe.d/kvm.rt.tuned.conf if [ -f /sys/module/kvm/parameters/kvmclock_periodic_sync ]; then kps=$(cat /sys/module/kvm/parameters/kvmclock_periodic_sync) if [ "$kps" = "N" -o "$kps" = "0" ]; then echo " kvmclock_periodic_sync:($kps): disabled: okay" else echo " kvmclock_periodic_sync:($kps): enabled: expected N(0)" retval=1 fi fi if [ -f /sys/module/kvm_intel/parameters/ple_gap ]; then ple_gap=$(cat /sys/module/kvm_intel/parameters/ple_gap) if [ $ple_gap -eq 0 ]; then echo " ple_gap:($ple_gap): disabled: okay" else echo " ple_gap:($ple_gap): enabled: expected 0" retval=1 fi fi if [ -f /sys/module/kvm/parameters/nx_huge_pages ]; then nxhp=$(cat /sys/module/kvm/parameters/nx_huge_pages) if [ "$nxhp" = "N" -o "$nxhp" = "0" ]; then echo " nx_huge_pages:($nxhp): disabled: okay" else echo " nx_huge_pages:($nxhp): enabled: expected N(0)" retval=1 fi fi return $retval } process $@ 070701000000A9000081A40000000000000000000000016391BC3A000007F3000000000000000000000000000000000000004600000000tuned-2.19.0.29+git.b894a3e/profiles/realtime-virtual-host/tuned.conf# # tuned configuration # # Dependencies: # # - tuna # - awk # - wc [main] summary=Optimize for KVM guests running realtime workloads include=realtime [variables] # User is responsible for adding isolated_cores=X-Y to realtime-virtual-host-variables.conf include=/etc/tuned/realtime-virtual-host-variables.conf isolated_cores_assert_check = \\${isolated_cores} # Fail if isolated_cores are not set assert1=${f:assertion_non_equal:isolated_cores are set:${isolated_cores}:${isolated_cores_assert_check}} isolated_cores_expanded=${f:cpulist_unpack:${isolated_cores}} isolated_cores_online_expanded=${f:cpulist_online:${isolated_cores}} non_isolated_cores=${f:cpulist_invert:${isolated_cores}} # Fail if isolated_cores contains CPUs which are not online assert2=${f:assertion:isolated_cores contains online
CPU(s):${isolated_cores_expanded}:${isolated_cores_online_expanded}} [scheduler] # group.group_name=rule_priority:scheduler_policy:scheduler_priority:core_affinity_in_hex:process_name_regex # for i in `pgrep ksoftirqd` ; do grep Cpus_allowed_list /proc/$i/status ; done group.ksoftirqd=0:f:2:*:^\[ksoftirqd group.ktimers=0:f:2:*:^\[ktimers # for i in `pgrep rcuc` ; do grep Cpus_allowed_list /proc/$i/status ; done group.rcuc=0:f:4:*:^\[rcuc # for i in `pgrep rcub` ; do grep Cpus_allowed_list /proc/$i/status ; done group.rcub=0:f:4:*:^\[rcub # for i in `pgrep ktimersoftd` ; do grep Cpus_allowed_list /proc/$i/status ; done group.ktimersoftd=0:f:3:*:^\[ktimersoftd ps_blacklist=^\[ksoftirqd;^\[ktimers;^\[rcuc;^\[rcub;^\[ktimersoftd;pmd;PMD;^DPDK;qemu-kvm [sysfs] # Stop kernel same page merge (KSM) daemon, it may introduce latencies. # # 0: stop ksmd, keep merged pages # 1: run ksmd # 2: stop ksmd, unmerge all pages /sys/kernel/mm/ksm/run = 2 # Perform lockless check for timer softirq on isolated CPUs. # /sys/kernel/ktimer_lockless_check = 1 [script] script=${i:PROFILE_DIR}/script.sh [bootloader] cmdline_rvh=+nohz=on nohz_full=${isolated_cores} rcu_nocbs=${isolated_cores} irqaffinity=${non_isolated_cores} 070701000000AA000081A40000000000000000000000016391BC3A000002C4000000000000000000000000000000000000004600000000tuned-2.19.0.29+git.b894a3e/profiles/realtime/realtime-variables.conf# Examples: # isolated_cores=2,4-7 # isolated_cores=2-23 # # Reserve 1 core per socket for housekeeping, isolate the rest. isolated_cores=${f:calc_isolated_cores:1} # # Uncomment the 'isolate_managed_irq=Y' below if you want to move kernel # managed IRQs out of isolated cores. Note that this requires kernel # support. Please only specify this parameter if you are sure that the # kernel supports it. # isolate_managed_irq=Y # # Set the desired combined queue count value using the parameter provided # below.
Ideally this should be set to the number of housekeeping CPUs i.e., # in the example given below it is assumed that the system has 4 housekeeping # (non-isolated) CPUs. # # netdev_queue_count=4 070701000000AB000081ED0000000000000000000000016391BC3A00000111000000000000000000000000000000000000003800000000tuned-2.19.0.29+git.b894a3e/profiles/realtime/script.sh#!/bin/sh . /usr/lib/tuned/functions start() { return 0 } stop() { return 0 } verify() { retval=0 if [ "$TUNED_isolated_cores" ]; then tuna -c "$TUNED_isolated_cores" -P > /dev/null 2>&1 retval=$? fi return $retval } process $@ 070701000000AC000081A40000000000000000000000016391BC3A0000095E000000000000000000000000000000000000003900000000tuned-2.19.0.29+git.b894a3e/profiles/realtime/tuned.conf# tuned configuration # # Red Hat Enterprise Linux for Real Time Documentation: # https://docs.redhat.com [main] summary=Optimize for realtime workloads include = network-latency [variables] # User is responsible for updating variables.conf with variable content such as isolated_cores=X-Y include = /etc/tuned/realtime-variables.conf isolated_cores_assert_check = \\${isolated_cores} # Make sure isolated_cores is defined before any of the variables that # use it (such as assert1) are defined, so that child profiles can set # isolated_cores directly in the profile (tuned.conf) isolated_cores = ${isolated_cores} # Fail if isolated_cores are not set assert1=${f:assertion_non_equal:isolated_cores are set:${isolated_cores}:${isolated_cores_assert_check}} # Non-isolated cores cpumask including offline cores not_isolated_cpumask = ${f:cpulist2hex_invert:${isolated_cores}} isolated_cores_expanded=${f:cpulist_unpack:${isolated_cores}} isolated_cpumask=${f:cpulist2hex:${isolated_cores_expanded}} isolated_cores_online_expanded=${f:cpulist_online:${isolated_cores}} # Fail if isolated_cores contains CPUs which are not online assert2=${f:assertion:isolated_cores contains online 
CPU(s):${isolated_cores_expanded}:${isolated_cores_online_expanded}} # Assembly managed_irq # Make sure isolate_managed_irq is defined before any of the variables that # use it (such as managed_irq) are defined, so that child profiles can set # isolate_managed_irq directly in the profile (tuned.conf) isolate_managed_irq = ${isolate_managed_irq} managed_irq=${f:regex_search_ternary:${isolate_managed_irq}:\b[y,Y,1,t,T]\b:managed_irq,domain,:} [net] channels=combined ${f:check_net_queue_count:${netdev_queue_count}} [sysctl] kernel.hung_task_timeout_secs = 600 kernel.nmi_watchdog = 0 kernel.sched_rt_runtime_us = -1 vm.stat_interval = 10 kernel.timer_migration = 0 [sysfs] /sys/bus/workqueue/devices/writeback/cpumask = ${not_isolated_cpumask} /sys/devices/virtual/workqueue/cpumask = ${not_isolated_cpumask} /sys/devices/virtual/workqueue/*/cpumask = ${not_isolated_cpumask} /sys/devices/system/machinecheck/machinecheck*/ignore_ce = 1 [bootloader] cmdline_realtime=+isolcpus=${managed_irq}${isolated_cores} intel_pstate=disable nosoftlockup tsc=reliable [irqbalance] banned_cpus=${isolated_cores} [script] script = ${i:PROFILE_DIR}/script.sh [scheduler] isolated_cores=${isolated_cores} [rtentsk] 070701000000AD000041ED0000000000000000000000026391BC3A00000000000000000000000000000000000000000000002E00000000tuned-2.19.0.29+git.b894a3e/profiles/sap-hana070701000000AE000081A40000000000000000000000016391BC3A00000163000000000000000000000000000000000000003900000000tuned-2.19.0.29+git.b894a3e/profiles/sap-hana/tuned.conf# # tuned configuration # [main] summary=Optimize for SAP HANA [cpu] force_latency=cstate.id_no_zero:3|70 governor=performance energy_perf_bias=performance min_perf_pct=100 [vm] transparent_hugepages=never [sysctl] kernel.sem = 32000 1024000000 500 32000 kernel.numa_balancing = 0 vm.dirty_ratio = 40 vm.dirty_background_ratio = 10 vm.swappiness = 10 
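The realtime profiles above lean heavily on cpulist helper functions such as `${f:cpulist_unpack:...}` and `${f:cpulist_invert:...}`. A plain-shell sketch of what the unpack step does — expanding a cpulist string into individual CPU ids — may help when debugging isolated_cores values; this is an illustration, not tuned's implementation:

```shell
#!/bin/sh
# Illustrative sketch of cpulist unpacking, as used by the realtime
# profiles (${f:cpulist_unpack:...}): expand a cpulist such as
# "0,2,4-7" into individual CPU ids, one per line.
# Not tuned's implementation.
cpulist_unpack() {
    # Split on commas, then expand each "lo-hi" range (or single id).
    echo "$1" | tr ',' '\n' | while IFS=- read -r lo hi; do
        seq "$lo" "${hi:-$lo}"
    done
}

cpulist_unpack "0,2,4-7"   # prints 0 2 4 5 6 7, one id per line
```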
070701000000AF000041ED0000000000000000000000026391BC3A00000000000000000000000000000000000000000000003300000000tuned-2.19.0.29+git.b894a3e/profiles/sap-netweaver070701000000B0000081A40000000000000000000000016391BC3A000000FB000000000000000000000000000000000000003E00000000tuned-2.19.0.29+git.b894a3e/profiles/sap-netweaver/tuned.conf# # tuned configuration # [main] summary=Optimize for SAP NetWeaver include=throughput-performance [sysctl] kernel.sem = 32000 1024000000 500 32000 kernel.shmall = 18446744073692774399 kernel.shmmax = 18446744073692774399 vm.max_map_count = 2000000 070701000000B1000041ED0000000000000000000000026391BC3A00000000000000000000000000000000000000000000003600000000tuned-2.19.0.29+git.b894a3e/profiles/server-powersave070701000000B2000081A40000000000000000000000016391BC3A00000077000000000000000000000000000000000000004100000000tuned-2.19.0.29+git.b894a3e/profiles/server-powersave/tuned.conf# # tuned configuration # [main] summary=Optimize for server power savings [cpu] [disk] [scsi_host] alpm=min_power 070701000000B3000041ED0000000000000000000000026391BC3A00000000000000000000000000000000000000000000003700000000tuned-2.19.0.29+git.b894a3e/profiles/spectrumscale-ece070701000000B4000081A40000000000000000000000016391BC3A00000257000000000000000000000000000000000000004200000000tuned-2.19.0.29+git.b894a3e/profiles/spectrumscale-ece/tuned.conf# # tuned configuration # [main] summary=Optimized for Spectrum Scale Erasure Code Edition Servers include=throughput-performance [cpu] governor=performance energy_perf_bias=performance min_perf_pct=100 [sysctl] kernel.numa_balancing = 1 vm.dirty_ratio = 40 vm.dirty_background_ratio = 10 vm.swappiness=10 net.ipv4.tcp_window_scaling = 1 net.ipv4.tcp_timestamps = 1 [scheduler] sched_min_granularity_ns = 10000000 sched_wakeup_granularity_ns = 15000000 [disk-sas] type=disk devices = sd* elevator = mq-deadline readahead = 0 [disk-nvme] type=disk devices = nvme* elevator = none readahead = 0 
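The [disk-sas] and [disk-nvme] instances above select their devices with shell-style globs (devices = sd*, devices = nvme*). As a conceptual sketch of that matching, assuming a made-up device list (tuned matches against the block devices actually present on the system, and its devices syntax has more features than plain fnmatch):

```python
# Glob-based device selection, as in the [disk-sas]/[disk-nvme] instances.
import fnmatch

present = ["sda", "sdb", "nvme0n1", "dm-0"]  # illustrative device names
sas = [d for d in present if fnmatch.fnmatch(d, "sd*")]
nvme = [d for d in present if fnmatch.fnmatch(d, "nvme*")]
print(sas, nvme)  # -> ['sda', 'sdb'] ['nvme0n1']
```

Each matched group then gets its own elevator and readahead settings, as the two instance sections show.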
070701000000B5000041ED0000000000000000000000026391BC3A00000000000000000000000000000000000000000000003300000000tuned-2.19.0.29+git.b894a3e/profiles/spindown-disk070701000000B6000081ED0000000000000000000000016391BC3A00000226000000000000000000000000000000000000003D00000000tuned-2.19.0.29+git.b894a3e/profiles/spindown-disk/script.sh#!/bin/sh . /usr/lib/tuned/functions EXT_PARTITIONS=$(mount | grep -E "type ext(3|4)" | cut -d" " -f1) start() { [ "$USB_AUTOSUSPEND" = 1 ] && enable_usb_autosuspend disable_bluetooth enable_wifi_powersave disable_logs_syncing remount_partitions commit=600,noatime $EXT_PARTITIONS sync return 0 } stop() { [ "$USB_AUTOSUSPEND" = 1 ] && disable_usb_autosuspend enable_bluetooth disable_wifi_powersave restore_logs_syncing remount_partitions commit=5 $EXT_PARTITIONS return 0 } process $@ 070701000000B7000081A40000000000000000000000016391BC3A00000372000000000000000000000000000000000000003E00000000tuned-2.19.0.29+git.b894a3e/profiles/spindown-disk/tuned.conf# # tuned configuration # # spindown-disk use case: # Save extra energy on your laptop or home server # which wakes up only when you ssh to it. On a server, # the hdparm and sysctl values could be problematic for # some types of disks. Laptops should probably be OK # with these numbers. # # Possible problems: # The script remounts your ext3 filesystems # as noatime. The rsyslog configuration is also # changed to not sync. hdparm sets the disk to # minimal spins, but without use of the tuned daemon. # Bluetooth will be switched off. # Wifi will be switched into power save mode.
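The start()/stop() hooks in script.sh above adjust the ext3/ext4 journal commit interval via the remount_partitions helper (from /usr/lib/tuned/functions). A minimal sketch of what those remounts amount to in mount(8) terms, using a placeholder device and echoing the commands so the sketch runs without root:

```shell
# Sketch only: /dev/sda2 is a placeholder; the real list comes from the
# EXT_PARTITIONS detection in script.sh, and remount_partitions performs
# the actual remount.
EXT_PARTITIONS="/dev/sda2"
for p in $EXT_PARTITIONS; do
    # start(): commit journal every 600 s and drop atime updates
    echo mount -o remount,commit=600,noatime "$p"
done
for p in $EXT_PARTITIONS; do
    # stop(): restore the kernel-default 5-second commit interval
    echo mount -o remount,commit=5 "$p"
done
```

The long commit interval lets the disk stay spun down between journal flushes, which is the whole point of this profile.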
[main] summary=Optimize for power saving by spinning-down rotational disks [disk] apm=128 spindown=6 [scsi_host] alpm=medium_power [sysctl] vm.dirty_writeback_centisecs=6000 vm.dirty_expire_centisecs=9000 vm.dirty_ratio=60 vm.laptop_mode=5 vm.swappiness=30 [script] script=${i:PROFILE_DIR}/script.sh 070701000000B8000041ED0000000000000000000000026391BC3A00000000000000000000000000000000000000000000003C00000000tuned-2.19.0.29+git.b894a3e/profiles/throughput-performance070701000000B9000081A40000000000000000000000016391BC3A000008A8000000000000000000000000000000000000004700000000tuned-2.19.0.29+git.b894a3e/profiles/throughput-performance/tuned.conf# # tuned configuration # [main] summary=Broadly applicable tuning that provides excellent performance across a variety of common server workloads [variables] thunderx_cpuinfo_regex=CPU part\s+:\s+(0x0?516)|(0x0?af)|(0x0?a[0-3])|(0x0?b8)\b amd_cpuinfo_regex=model name\s+:.*\bAMD\b [cpu] governor=performance energy_perf_bias=performance min_perf_pct=100 # Marvell ThunderX [vm.thunderx] type=vm uname_regex=aarch64 cpuinfo_regex=${thunderx_cpuinfo_regex} transparent_hugepages=never [disk] # The default unit for readahead is KiB. This can be adjusted to sectors # by specifying the relevant suffix, eg. (readahead => 8192 s). There must # be at least one space between the number and suffix (if suffix is specified). readahead=>4096 [sysctl] # If a workload mostly uses anonymous memory and it hits this limit, the entire # working set is buffered for I/O, and any more write buffering would require # swapping, so it's time to throttle writes until I/O can catch up. Workloads # that mostly use file mappings may be able to use even higher values. # # The generator of dirty data starts writeback at this percentage (system default # is 20%) vm.dirty_ratio = 40 # Start background writeback (via writeback threads) at this percentage (system # default is 10%) vm.dirty_background_ratio = 10 # PID allocation wrap value. 
When the kernel's next PID value # reaches this value, it wraps back to a minimum PID value. # PIDs of value pid_max or larger are not allocated. # # A suggested value for pid_max is 1024 * <# of cpu cores/threads in system> # e.g., a box with 32 cpus, the default of 32768 is reasonable, for 64 cpus, # 65536, for 4096 cpus, 4194304 (which is the upper limit possible). #kernel.pid_max = 65536 # The swappiness parameter controls the tendency of the kernel to move # processes out of physical memory and onto the swap disk. # 0 tells the kernel to avoid swapping processes out of physical memory # for as long as possible # 100 tells the kernel to aggressively swap processes out of physical memory # and move them to swap cache vm.swappiness=10 # Marvell ThunderX [sysctl.thunderx] type=sysctl uname_regex=aarch64 cpuinfo_regex=${thunderx_cpuinfo_regex} kernel.numa_balancing=0 070701000000BA000041ED0000000000000000000000026391BC3A00000000000000000000000000000000000000000000003300000000tuned-2.19.0.29+git.b894a3e/profiles/virtual-guest070701000000BB000081A40000000000000000000000016391BC3A000002ED000000000000000000000000000000000000003E00000000tuned-2.19.0.29+git.b894a3e/profiles/virtual-guest/tuned.conf# # tuned configuration # [main] summary=Optimize for running inside a virtual guest include=throughput-performance [sysctl] # If a workload mostly uses anonymous memory and it hits this limit, the entire # working set is buffered for I/O, and any more write buffering would require # swapping, so it's time to throttle writes until I/O can catch up. Workloads # that mostly use file mappings may be able to use even higher values. # # The generator of dirty data starts writeback at this percentage (system default # is 20%) vm.dirty_ratio = 30 # Filesystem I/O is usually much more efficient than swapping, so try to keep # swapping low. It's usually safe to go even lower than this on systems with # server-grade storage. 
vm.swappiness = 30 070701000000BC000041ED0000000000000000000000026391BC3A00000000000000000000000000000000000000000000003200000000tuned-2.19.0.29+git.b894a3e/profiles/virtual-host070701000000BD000081A40000000000000000000000016391BC3A0000014B000000000000000000000000000000000000003D00000000tuned-2.19.0.29+git.b894a3e/profiles/virtual-host/tuned.conf# # tuned configuration # [main] summary=Optimize for running KVM guests include=throughput-performance [sysctl] # Start background writeback (via writeback threads) at this percentage (system # default is 10%) vm.dirty_background_ratio = 5 [cpu] # Setting C3 state sleep mode/power savings force_latency=cstate.id_no_zero:3|70 070701000000BE000081A40000000000000000000000016391BC3A00000725000000000000000000000000000000000000002B00000000tuned-2.19.0.29+git.b894a3e/recommend.conf# TuneD rules for recommend_profile. # # Syntax: # [PROFILE1] # KEYWORD11=RE11 # KEYWORD21=RE12 # # [PROFILE2] # KEYWORD21=RE21 # KEYWORD22=RE22 # KEYWORD can be: # virt - for RE to match output of virt-what # system - for RE to match content of /etc/system-release-cpe # process - for RE to match running processes. It can have arbitrary # suffix, all process* lines have to match for the PROFILE # to match (i.e. the AND operator) # /FILE - for RE to match content of the FILE, e.g.: # '/etc/passwd=.+'. If file doesn't exist, its RE will not # match. # chassis_type - for RE to match the chassis type as reported by dmidecode # syspurpose_role - for RE to match the system role as reported by syspurpose # All REs for all KEYWORDs have to match for PROFILE to match (i.e. the AND operator). # If 'virt' or 'system' is not specified, it matches for every string. # If 'virt' or 'system' is empty, i.e. 'virt=', it matches only empty string (alias for '^$'). # If several profiles matched, the first match is taken. 
# # Limitation: # Each profile can be specified only once, because there cannot be # multiple sections in the configuration file with the same name # (ConfigParser limitation). # If there is a need to specify the profile multiple times, unique # suffix like ',ANYSTRING' can be used. Everything after the last ',' # is stripped by the parser, e.g.: # # [balanced,1] # /FILE1=RE1 # # [balanced,2] # /FILE2=RE2 # # This will set 'balanced' profile in case there is FILE1 matching RE1 or # FILE2 matching RE2 or both. [atomic-host] virt= system=.*atomic.* [atomic-guest] virt=.+ system=.*atomic.* [throughput-performance] virt= system=.*(computenode|server).* [virtual-guest] virt=.+ [balanced] 070701000000BF000041ED0000000000000000000000026391BC3A00000000000000000000000000000000000000000000002600000000tuned-2.19.0.29+git.b894a3e/systemtap070701000000C0000081ED0000000000000000000000016391BC3A00001093000000000000000000000000000000000000003200000000tuned-2.19.0.29+git.b894a3e/systemtap/diskdevstat#!/usr/bin/stap # # Copyright (C) 2008-2013 Red Hat, Inc. # Authors: Phil Knirsch # # This program is free software; you can redistribute it and/or # modify it under the terms of the GNU General Public License # as published by the Free Software Foundation; either version 2 # of the License, or (at your option) any later version. # # This program is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU General Public License for more details. # # You should have received a copy of the GNU General Public License # along with this program; if not, write to the Free Software # Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA. 
# global ifavg, iflast, rtime, interval, duration, histogram function help() { print( "A simple systemtap script to record harddisk activity of processes and display\n" "statistics for read/write operations.\n\n" "usage: diskdevstat [update-interval] [total-duration] [display-histogram]\n\n" " update-interval in seconds, the default is 5\n" " total-duration in seconds, the default is 86400\n" " display-histogram anything, unset by default\n" ); } probe begin { rtime = 0; interval = 5; duration = 86400; histogram = 0; %( $# > 0 %? interval = strtol(@1,10); %) %( $# > 1 %? duration = strtol(@2,10); %) %( $# > 2 %? histogram = 1; %) if ($# >= 4 || interval <= 0 || duration <= 0) { help(); exit(); } } probe vfs.write { if(pid() == 0 || devname == "N/A") { next; } ms = gettimeofday_ms(); if(iflast[0, pid(), devname, execname(), uid()] == 0) { iflast[0, pid(), devname, execname(), uid()] = ms; } else { diff = ms - iflast[0, pid(), devname, execname(), uid()]; iflast[0, pid(), devname, execname(), uid()] = ms; ifavg[0, pid(), devname, execname(), uid()] <<< diff; } } probe vfs.read { if(pid() == 0 || devname == "N/A") { next; } ms = gettimeofday_ms(); if(iflast[1, pid(), devname, execname(), uid()] == 0) { iflast[1, pid(), devname, execname(), uid()] = ms; } else { diff = ms - iflast[1, pid(), devname, execname(), uid()]; iflast[1, pid(), devname, execname(), uid()] = ms; ifavg[1, pid(), devname, execname(), uid()] <<< diff; } } function print_activity() { printf("\033[2J\033[1;1H") printf("%5s %5s %-7s %9s %9s %9s %9s %9s %9s %9s %9s %-15s\n", "PID", "UID", "DEV", "WRITE_CNT", "WRITE_MIN", "WRITE_MAX", "WRITE_AVG", "READ_CNT", "READ_MIN", "READ_MAX", "READ_AVG", "COMMAND") foreach ([type, pid, dev, exec, uid] in ifavg-) { nxmit = @count(ifavg[0, pid, dev, exec, uid]) nrecv = @count(ifavg[1, pid, dev, exec, uid]) write_min = nxmit ? @min(ifavg[0, pid, dev, exec, uid]) : 0 write_max = nxmit ? @max(ifavg[0, pid, dev, exec, uid]) : 0 write_avg = nxmit ? 
@avg(ifavg[0, pid, dev, exec, uid]) : 0 read_min = nrecv ? @min(ifavg[1, pid, dev, exec, uid]) : 0 read_max = nrecv ? @max(ifavg[1, pid, dev, exec, uid]) : 0 read_avg = nrecv ? @avg(ifavg[1, pid, dev, exec, uid]) : 0 if(type == 0 || nxmit == 0) { printf("%5d %5d %-7s %9d %5d.%03d %5d.%03d %5d.%03d", pid, uid, dev, nxmit, write_min/1000, write_min%1000, write_max/1000, write_max%1000, write_avg/1000, write_avg%1000) printf(" %9d %5d.%03d %5d.%03d %5d.%03d %-15s\n", nrecv, read_min/1000, read_min%1000, read_max/1000, read_max%1000, read_avg/1000, read_avg%1000, exec) } } print("\n") } function print_histogram() { foreach ([type, pid, dev, exec, uid] in ifavg-) { nxmit = @count(ifavg[0, pid, dev, exec, uid]) nrecv = @count(ifavg[1, pid, dev, exec, uid]) if (type == 0 || nxmit == 0) { printf("%5d %5d %-7s %-15s\n", pid, uid, dev, exec) if (nxmit > 0) { printf(" WRITE histogram\n") print(@hist_log(ifavg[0, pid, dev, exec, uid])) } if (nrecv > 0) { printf(" READ histogram\n") print(@hist_log(ifavg[1, pid, dev, exec, uid])) } } } } probe timer.s(1) { rtime = rtime + 1; if (rtime % interval == 0) { print_activity() } if (rtime >= duration) { exit(); } } probe end, error { if (histogram == 1) { print_histogram(); } exit(); } 070701000000C1000081ED0000000000000000000000016391BC3A00001075000000000000000000000000000000000000003100000000tuned-2.19.0.29+git.b894a3e/systemtap/netdevstat#!/usr/bin/stap # # Copyright (C) 2008-2013 Red Hat, Inc. # Authors: Phil Knirsch # # This program is free software; you can redistribute it and/or # modify it under the terms of the GNU General Public License # as published by the Free Software Foundation; either version 2 # of the License, or (at your option) any later version. # # This program is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU General Public License for more details. 
# # You should have received a copy of the GNU General Public License # along with this program; if not, write to the Free Software # Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA. # global ifavg, iflast, rtime, interval, duration, histogram function help() { print( "A simple systemtap script to record network activity of processes and display\n" "statistics for transmit/receive operations.\n\n" "usage: netdevstat [update-interval] [total-duration] [display-histogram]\n\n" " update-interval in seconds, the default is 5\n" " total-duration in seconds, the default is 86400\n" " display-histogram anything, unset by default\n" ); } probe begin { rtime = 0; interval = 5; duration = 86400; histogram = 0; %( $# > 0 %? interval = strtol(@1,10); %) %( $# > 1 %? duration = strtol(@2,10); %) %( $# > 2 %? histogram = 1; %) if ($# >= 4 || interval <= 0 || duration <= 0) { help(); exit(); } } probe netdev.transmit { if(pid() == 0) { next; } ms = gettimeofday_ms(); if(iflast[0, pid(), dev_name, execname(), uid()] == 0) { iflast[0, pid(), dev_name, execname(), uid()] = ms; } else { diff = ms - iflast[0, pid(), dev_name, execname(), uid()]; iflast[0, pid(), dev_name, execname(), uid()] = ms; ifavg[0, pid(), dev_name, execname(), uid()] <<< diff; } } probe netdev.receive { if(pid() == 0) { next; } ms = gettimeofday_ms(); if(iflast[1, pid(), dev_name, execname(), uid()] == 0) { iflast[1, pid(), dev_name, execname(), uid()] = ms; } else { diff = ms - iflast[1, pid(), dev_name, execname(), uid()]; iflast[1, pid(), dev_name, execname(), uid()] = ms; ifavg[1, pid(), dev_name, execname(), uid()] <<< diff; } } function print_activity() { printf("\033[2J\033[1;1H") printf("%5s %5s %-7s %9s %9s %9s %9s %9s %9s %9s %9s %-15s\n", "PID", "UID", "DEV", "XMIT_CNT", "XMIT_MIN", "XMIT_MAX", "XMIT_AVG", "RECV_CNT", "RECV_MIN", "RECV_MAX", "RECV_AVG", "COMMAND") foreach ([type, pid, dev, exec, uid] in ifavg-) { nxmit = @count(ifavg[0, pid, dev, exec, uid]) nrecv = 
@count(ifavg[1, pid, dev, exec, uid]) xmit_min = nxmit ? @min(ifavg[0, pid, dev, exec, uid]) : 0 xmit_max = nxmit ? @max(ifavg[0, pid, dev, exec, uid]) : 0 xmit_avg = nxmit ? @avg(ifavg[0, pid, dev, exec, uid]) : 0 recv_min = nrecv ? @min(ifavg[1, pid, dev, exec, uid]) : 0 recv_max = nrecv ? @max(ifavg[1, pid, dev, exec, uid]) : 0 recv_avg = nrecv ? @avg(ifavg[1, pid, dev, exec, uid]) : 0 if(type == 0 || nxmit == 0) { printf("%5d %5d %-7s %9d %5d.%03d %5d.%03d %5d.%03d ", pid, uid, dev, nxmit, xmit_min/1000, xmit_min%1000, xmit_max/1000, xmit_max%1000, xmit_avg/1000, xmit_avg%1000) printf("%9d %5d.%03d %5d.%03d %5d.%03d %-15s\n", nrecv, recv_min/1000, recv_min%1000, recv_max/1000, recv_max%1000, recv_avg/1000, recv_avg%1000, exec) } } print("\n") } function print_histogram() { foreach ([type+, pid, dev, exec, uid] in ifavg) { nxmit = @count(ifavg[0, pid, dev, exec, uid]) nrecv = @count(ifavg[1, pid, dev, exec, uid]) if (type == 0 || nxmit == 0) { printf("%5d %5d %-7s %-15s\n", pid, uid, dev, exec) if (nxmit > 0) { printf(" WRITE histogram\n") print(@hist_log(ifavg[0, pid, dev, exec, uid])) } if (nrecv > 0) { printf(" READ histogram\n") print(@hist_log(ifavg[1, pid, dev, exec, uid])) } } } } probe timer.s(1) { rtime = rtime + 1; if (rtime % interval == 0) { print_activity() } if (rtime >= duration) { exit(); } } probe end, error { if (histogram == 1) { print_histogram(); } exit(); } 070701000000C2000081ED0000000000000000000000016391BC3A00001B30000000000000000000000000000000000000002D00000000tuned-2.19.0.29+git.b894a3e/systemtap/scomes#!/usr/bin/stap # # Copyright (C) 2008-2013 Red Hat, Inc. # # This program is free software; you can redistribute it and/or # modify it under the terms of the GNU General Public License # as published by the Free Software Foundation; either version 2 # of the License, or (at your option) any later version. 
# # This program is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU General Public License for more details. # # You should have received a copy of the GNU General Public License # along with this program; if not, write to the Free Software # Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA. # # =================================================================== # Do this when we have started global report_period = 0 # For watch_forked variable: # 0 = watch only main thread # 1 = watch forked with same execname as well # 2 = watch all forked processes global watch_forked = 2 probe begin { if ($# == 1) { report_period = $1 } print("Collecting data...\n") printf("... for pid %d - %s\n", target(), pid2execname(target())) } # =================================================================== # Define helper function for printing results function compute_score() { # new empirical formula that was proposed by Vratislav Podzimek <vpodzime@redhat.com> in his bachelor thesis return syscalls + (poll_timeout + epoll_timeout + select_timeout + itimer_timeout + nanosleep_timeout + futex_timeout + signal_timeout) * 4 + (reads + writes) / 5000 + (ifxmit + ifrecv) * 25 } function print_status() { printf("-----------------------------------\n") ###target_set_report() printf("Monitored process: %d (%s)\n", target(), pid2execname(target())) printf("Number of syscalls: %d\n", syscalls) printf("Kernel/Userspace ticks: %d/%d (%d)\n", kticks, uticks, kticks+uticks) printf("Read/Written bytes (from/to devices): %d/%d (%d)\n", reads, writes, reads+writes) printf("Read/Written bytes (from/to N/A device): %d/%d (%d)\n", reads_c, writes_c, reads_c+writes_c) printf("Transmitted/Received bytes: %d/%d (%d)\n", ifxmit, ifrecv, ifxmit+ifrecv) printf("Polling syscalls: %d\n",
poll_timeout+epoll_timeout+select_timeout+itimer_timeout+nanosleep_timeout+futex_timeout+signal_timeout) printf("SCORE: %d\n", compute_score()) } # =================================================================== # Define helper function for comparing if this is relevant pid # and for watching if our watched pid forked # ... from http://sourceware.org/systemtap/wiki/systemtapstarters global PIDS = 1 # as target() is already running function is_watched(p) { if ( (watch_forked == 0 && p == target()) || (watch_forked == 1 && target_set_pid(p) && pid2execname(target()) == pid2execname(p)) || (watch_forked == 2 && target_set_pid(p)) ) { #printf("Process %d is relevant to process %d\n", p, target()) return 1 # yes, we are watching this pid } else { return 0 # no, we are not watching this pid } } # Add a relevant forked process to the list of watched processes probe kernel.function("do_fork") { #printf("Fork of %d (%s) detected\n", pid(), execname()) if (is_watched(pid())) { #printf("Proces %d forked\n", pid()) PIDS = PIDS + 1 #printf("Currently watching %d pids (1 just added)\n", PIDS) } } # Remove pid from the list of watched pids and print report when # all relevant processes ends probe syscall.exit { if (is_watched(pid())) { #printf("Removing process %d\n", pid()) PIDS = PIDS - 1 } #printf("Currently watching %d pids (1 just removed)\n", PIDS) if (PIDS == 0) { printf("-----------------------------------\n") printf("LAST RESULTS:\n") print_status() exit() } } # =================================================================== # Check all syscalls # ... from syscalls_by_pid.stp global syscalls probe syscall.* { if (is_watched(pid())) { syscalls++ #printf ("%s(%d) syscall %s\n", execname(), pid(), name) } } # =================================================================== # Check read/written bytes # ... 
from disktop.stp global reads, writes, reads_c, writes_c probe vfs.read.return { if (is_watched(pid()) && $return>0) { if (devname!="N/A") { reads += $return } else { reads_c += $return } } } probe vfs.write.return { if (is_watched(pid()) && $return>0) { if (devname!="N/A") { writes += $return } else { writes_c += $return } } } # =================================================================== # Check kernel and userspace CPU ticks # ... from thread-times.stp global kticks, uticks probe timer.profile { if (is_watched(pid())) { if (!user_mode()) kticks++ else uticks++ } } # =================================================================== # Check polling # ... from timeout.stp global poll_timeout, epoll_timeout, select_timeout, itimer_timeout global nanosleep_timeout, futex_timeout, signal_timeout global to probe syscall.poll, syscall.epoll_wait { if (timeout) to[pid()]=timeout } probe syscall.poll.return { if ($return == 0 && is_watched(pid()) && to[pid()] > 0) { poll_timeout++ delete to[pid()] } } probe syscall.epoll_wait.return { if ($return == 0 && is_watched(pid()) && to[pid()] > 0) { epoll_timeout++ delete to[pid()] } } probe syscall.select.return { if ($return == 0 && is_watched(pid())) { select_timeout++ } } probe syscall.futex.return { if ($return == 0 && is_watched(pid()) && errno_str($return) == "ETIMEDOUT") { futex_timeout++ } } probe syscall.nanosleep.return { if ($return == 0 && is_watched(pid())) { nanosleep_timeout++ } } probe kernel.function("it_real_fn") { if (is_watched(pid())) { itimer_timeout++ } } probe syscall.rt_sigtimedwait.return { if (is_watched(pid()) && errno_str($return) == "EAGAIN") { signal_timeout++ } } # =================================================================== # Check network traffic # ... 
from nettop.stp global ifxmit, ifrecv probe netdev.transmit { if (is_watched(pid()) && dev_name!="lo") { ifxmit += length } } probe netdev.receive { if (is_watched(pid()) && dev_name!="lo") { ifrecv += length } } # =================================================================== # Print report each X seconds global counter probe timer.s(1) { if (report_period != 0) { counter++ if (counter == report_period) { print_status() counter = 0 } } } # =================================================================== # Print quit message probe end { printf("-----------------------------------\n") printf("LAST RESULTS:\n") print_status() printf("-----------------------------------\n") printf("QUITTING\n") printf("-----------------------------------\n") } 070701000000C3000081ED0000000000000000000000016391BC3A00000B5E000000000000000000000000000000000000003100000000tuned-2.19.0.29+git.b894a3e/systemtap/varnetload#!/usr/bin/python3 -Es # # varnetload: A python script to create reproducible sustained network traffic # Copyright (C) 2008-2013 Red Hat, Inc. # Authors: Phil Knirsch # # This program is free software; you can redistribute it and/or # modify it under the terms of the GNU General Public License # as published by the Free Software Foundation; either version 2 # of the License, or (at your option) any later version. # # This program is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU General Public License for more details. # # You should have received a copy of the GNU General Public License # along with this program; if not, write to the Free Software # Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
# # # Usage: varnetload [-d delay in milliseconds] [-t runtime in seconds] [-u URL] # # How to use it: # - In order to use it effectively, you need an HTTP server in your local # LAN where you can put files. # - Upload a large HTML file (or any other kind of file) to the HTTP server. # - Use the -u option of the script to point to that URL. # - Play with the delay option to vary the load put on your network. Typical # useful values range from 0 to 500. # from __future__ import print_function import time, getopt, sys # exception handler for python 2/3 compatibility try: from urllib.request import * from urllib.error import * except ImportError: from urllib2 import * def usage(): print("Usage: varnetload [-d delay in milliseconds] [-t runtime in seconds] [-u URL]") delay = 1000.0 rtime = 60.0 url = "http://myhost.mydomain/index.html" try: opts, args = getopt.getopt(sys.argv[1:], "d:t:u:") except getopt.error as e: print("Error parsing command-line arguments: %s" % e) usage() sys.exit(1) for (opt, val) in opts: if opt == '-d': delay = float(val) elif opt == '-t': rtime = float(val) elif opt == '-u': url = val else: print("Unknown option: %s" % opt) usage() sys.exit(1) endtime = time.time() + rtime delay = float(delay)/1000.0 try: count = 0 while(time.time() < endtime): if (delay < 0.01): urlopen(url).read(409600) time.sleep(delay) urlopen(url).read(409600) time.sleep(delay) urlopen(url).read(409600) time.sleep(delay) urlopen(url).read(409600) time.sleep(delay) urlopen(url).read(409600) time.sleep(delay) urlopen(url).read(409600) time.sleep(delay) urlopen(url).read(409600) time.sleep(delay) urlopen(url).read(409600) time.sleep(delay) urlopen(url).read(409600) time.sleep(delay) count += 9 urlopen(url).read(409600) time.sleep(delay) count += 1 except URLError as e: print("Downloading failed: %s" % e.reason) sys.exit(2) print("Finished varnetload.
Received %d pages in %d seconds" % (count, rtime)) 070701000000C4000041ED0000000000000000000000046391BC3A00000000000000000000000000000000000000000000002200000000tuned-2.19.0.29+git.b894a3e/tests070701000000C5000041ED0000000000000000000000086391BC3A00000000000000000000000000000000000000000000002C00000000tuned-2.19.0.29+git.b894a3e/tests/beakerlib070701000000C6000041ED0000000000000000000000026391BC3A00000000000000000000000000000000000000000000005A00000000tuned-2.19.0.29+git.b894a3e/tests/beakerlib/Program-tuned-tried-to-access-dev-mem-between070701000000C7000081A40000000000000000000000016391BC3A00000186000000000000000000000000000000000000006300000000tuned-2.19.0.29+git.b894a3e/tests/beakerlib/Program-tuned-tried-to-access-dev-mem-between/main.fmfcomponent: tuned contact: Robin Hack <rhack@redhat.com> description: | Bug summary: Program tuned tried to access /dev/mem between f0000->100000. Bugzilla link: https://bugzilla.redhat.com/show_bug.cgi?id=1688371 duration: 20m relevancy: | distro = rhel-4, rhel-5, rhel-6: False summary: Test for BZ#1688371 (Program tuned tried to access /dev/mem between) framework: beakerlib 070701000000C8000081ED0000000000000000000000016391BC3A00000547000000000000000000000000000000000000006500000000tuned-2.19.0.29+git.b894a3e/tests/beakerlib/Program-tuned-tried-to-access-dev-mem-between/runtest.sh#!/bin/bash # vim: dict+=/usr/share/beakerlib/dictionary.vim cpt=.,w,b,u,t,i,k # ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ # # runtest.sh of /CoreOS/tuned/Regression/Program-tuned-tried-to-access-dev-mem-between # Description: Test for BZ#1688371 (Program tuned tried to access /dev/mem between) # Author: Robin Hack <rhack@redhat.com> # # ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ # # Copyright Red Hat # # SPDX-License-Identifier: GPL-2.0-or-later WITH GPL-CC-1.0 # # ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ # Include Beaker environment . 
/usr/share/beakerlib/beakerlib.sh || exit 1

PACKAGE="tuned"

rlJournalStart
    rlPhaseStartSetup
        rlAssertRpm $PACKAGE
        rlRun "TmpDir=\$(mktemp -d)" 0 "Creating tmp directory"
        rlRun "pushd $TmpDir"
        rlServiceStop "tuned"
        # systemd can have some issues with quick restarts sometimes
        sleep 1
        rlServiceStart "tuned"
    rlPhaseEnd

    rlPhaseStartTest
        rlRun "dmesg | tee TEST_OUT"
        rlAssertNotGrep "Program tuned tried to access /dev/mem" "TEST_OUT"
    rlPhaseEnd

    rlPhaseStartCleanup
        rlRun "popd"
        rlRun "rm -r $TmpDir" 0 "Removing tmp directory"
        rlServiceRestore "tuned"
    rlPhaseEnd
rlJournalPrintText
rlJournalEnd

File: tuned-2.19.0.29+git.b894a3e/tests/beakerlib/Tuned-takes-too-long-to-reload-start-when-ulimit/main.fmf

component: tuned
contact: Robin Hack <rhack@redhat.com>
description: |
    Bug summary: TuneD takes too long to reload/start when "ulimit -n" is high.
    Bugzilla link: https://bugzilla.redhat.com/show_bug.cgi?id=1663412
duration: 20m
relevancy: |
    distro = rhel-4, rhel-5, rhel-6: False
summary: Test for BZ#1663412 (TuneD takes too long to reload/start when \"ulimit)
framework: beakerlib

File: tuned-2.19.0.29+git.b894a3e/tests/beakerlib/Tuned-takes-too-long-to-reload-start-when-ulimit/runtest.sh

#!/bin/bash
# vim: dict+=/usr/share/beakerlib/dictionary.vim cpt=.,w,b,u,t,i,k
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
#
#   runtest.sh of /CoreOS/tuned/Regression/Tuned-takes-too-long-to-reload-start-when-ulimit
#   Description: Test for BZ#1663412 (TuneD takes too long to reload/start when "ulimit)
#   Author: Robin Hack <rhack@redhat.com>
#
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
#
#   Copyright Red Hat
#
#   SPDX-License-Identifier: GPL-2.0-or-later WITH GPL-CC-1.0
#
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

# Include Beaker environment
. /usr/share/beakerlib/beakerlib.sh || exit 1

PACKAGE="tuned"

rlJournalStart
    rlPhaseStartSetup
        rlAssertRpm $PACKAGE
        rlRun "TmpDir=\$(mktemp -d)" 0 "Creating tmp directory"
        rlRun "pushd $TmpDir"
        rlServiceStop "tuned"
        rlFileBackup "/etc/tuned/tuned-main.conf"
    rlPhaseEnd

    rlPhaseStartTest
        rlRun "sed 's/daemon = 1/daemon = 0/g' /etc/tuned/tuned-main.conf > /etc/tuned/tuned-main.conf.new"
        rlRun "mv /etc/tuned/tuned-main.conf.new /etc/tuned/tuned-main.conf"
        rlRun "ulimit -H -n 1048576"
        rlRun "ulimit -S -n 1048576"
        rlRun "tuned --debug 2>&1 | tee TEST_OUT"
        rlAssertNotGrep "tuned.plugins.plugin_sysctl: executing \['sysctl'," TEST_OUT
    rlPhaseEnd

    rlPhaseStartCleanup
        rlRun "popd"
        rlRun "rm -r $TmpDir" 0 "Removing tmp directory"
        rlFileRestore
        rlServiceRestore "tuned"
    rlPhaseEnd
rlJournalPrintText
rlJournalEnd

File: tuned-2.19.0.29+git.b894a3e/tests/beakerlib/bz1798183-RFE-support-post-loaded-profile/conflicting/tuned.conf

[main]
summary=Profile conflicting with the parent profile
[sysctl]
vm.swappiness=10

File: tuned-2.19.0.29+git.b894a3e/tests/beakerlib/bz1798183-RFE-support-post-loaded-profile/main.fmf

summary: Test for BZ#1798183 (RFE support post-loaded profile)
description: |
    Bug summary: RFE: support post-loaded profile
    Bugzilla link: https://bugzilla.redhat.com/show_bug.cgi?id=1798183
contact:
- Ondřej Lysoněk <olysonek@redhat.com>
component:
- tuned
require:
- tuned
duration: 5m
extra-task: /CoreOS/tuned/Regression/bz1798183-RFE-support-post-loaded-profile
framework: beakerlib

File: tuned-2.19.0.29+git.b894a3e/tests/beakerlib/bz1798183-RFE-support-post-loaded-profile/parent-vars/tuned.conf

[main]
summary=Parent profile that defines a variable
[variables]
foo=12
[sysctl]
vm.swappiness=${foo}

File: tuned-2.19.0.29+git.b894a3e/tests/beakerlib/bz1798183-RFE-support-post-loaded-profile/parent/tuned.conf

[main]
summary=Parent profile
[sysctl]
vm.swappiness=20

File: tuned-2.19.0.29+git.b894a3e/tests/beakerlib/bz1798183-RFE-support-post-loaded-profile/parent2/tuned.conf

[main]
summary=Different parent profile
[sysctl]
vm.swappiness=30
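The parent-vars profile above defines a value in a `[variables]` section and references it as `${foo}` in `[sysctl]`. As a rough illustration only (this is not TuneD's actual resolver, just a minimal sketch of the substitution idea), such placeholders can be expanded with `configparser` and a regex:

```python
import configparser
import re

# Hypothetical minimal sketch (not TuneD's code): expand ${name}
# placeholders in the [sysctl] section using the [variables] section,
# mirroring the parent-vars profile above.
PROFILE = """
[main]
summary=Parent profile that defines a variable
[variables]
foo=12
[sysctl]
vm.swappiness=${foo}
"""

def resolve_profile(text):
    cp = configparser.ConfigParser()
    cp.read_string(text)
    # Values defined in [variables] drive the substitution.
    variables = dict(cp["variables"]) if cp.has_section("variables") else {}
    resolved = {}
    for key, value in cp["sysctl"].items():
        # Replace each ${name}; unknown names are left untouched.
        resolved[key] = re.sub(r"\$\{(\w+)\}",
                               lambda m: variables.get(m.group(1), m.group(0)),
                               value)
    return resolved

print(resolve_profile(PROFILE))  # {'vm.swappiness': '12'}
```

Note that TuneD also supports chained references (the variables-support-in-profiles test below uses `SWAPPINESS2 = ${SWAPPINESS1}`); this sketch deliberately handles only one level.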
File: tuned-2.19.0.29+git.b894a3e/tests/beakerlib/bz1798183-RFE-support-post-loaded-profile/post-vars/tuned.conf

[main]
summary=Post-loaded profile that uses variables from the regular active profile
[sysctl]
vm.dirty_ratio=${foo}

File: tuned-2.19.0.29+git.b894a3e/tests/beakerlib/bz1798183-RFE-support-post-loaded-profile/post/tuned.conf

[main]
summary=Post-loaded profile
[sysctl]
vm.dirty_ratio=8

File: tuned-2.19.0.29+git.b894a3e/tests/beakerlib/bz1798183-RFE-support-post-loaded-profile/post2/tuned.conf

[main]
summary=Second version of the post-loaded profile
[sysctl]
vm.dirty_ratio=7

File: tuned-2.19.0.29+git.b894a3e/tests/beakerlib/bz1798183-RFE-support-post-loaded-profile/runtest.sh

#!/bin/bash
# vim: dict+=/usr/share/beakerlib/dictionary.vim cpt=.,w,b,u,t,i,k
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
#
#   runtest.sh of /CoreOS/tuned/Regression/bz1798183-RFE-support-post-loaded-profile
#   Description: Test for BZ#1798183 (RFE support post-loaded profile)
#   Author: Ondřej Lysoněk <olysonek@redhat.com>
#
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
#
#   Copyright Red Hat
#
#   SPDX-License-Identifier: GPL-2.0-or-later WITH GPL-CC-1.0
#
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

# Include Beaker environment
. /usr/share/beakerlib/beakerlib.sh || exit 1

PACKAGE="tuned"
PROFILE_DIR=/etc/tuned
ACTIVE_PROFILE=/etc/tuned/active_profile
PROFILE_MODE=/etc/tuned/profile_mode
POST_LOADED_PROFILE=/etc/tuned/post_loaded_profile
SWAPPINESS=vm.swappiness
DIRTY_RATIO=vm.dirty_ratio
PID_FILE=/run/tuned/tuned.pid
SERVICE_OVERRIDE_DIR=/etc/systemd/system/tuned.service.d
PYTHON_CHECK="python3 /usr/libexec/platform-python python2 python"
PYTHON=python3

function wait_for_tuned() {
    local timeout=$1
    local elapsed=0
    while ! $PYTHON -c 'import dbus; bus = dbus.SystemBus(); exit(0 if bus.name_has_owner("com.redhat.tuned") else 1)'; do
        sleep 1
        elapsed=$(($elapsed + 1))
        if [ "$elapsed" -ge "$timeout" ]; then
            return 1
        fi
    done
    return 0
}

function wait_for_tuned_stop() {
    local timeout=$1
    local elapsed=0
    while test -f "$PID_FILE"; do
        sleep 1
        elapsed=$(($elapsed + 1))
        if [ "$elapsed" -ge "$timeout" ]; then
            return 1
        fi
    done
    return 0
}

rlJournalStart
    rlPhaseStartSetup
        rlAssertRpm $PACKAGE
        rlRun "for PYTHON in $PYTHON_CHECK; do \$PYTHON --version 2>/dev/null && break; done" 0 "Detect python"
        rlRun "rlFileBackup --clean $PROFILE_DIR"
        rlRun "cp -r parent $PROFILE_DIR"
        rlRun "cp -r parent2 $PROFILE_DIR"
        rlRun "cp -r parent-vars $PROFILE_DIR"
        rlRun "cp -r post $PROFILE_DIR"
        rlRun "cp -r post2 $PROFILE_DIR"
        rlRun "cp -r post-vars $PROFILE_DIR"
        rlRun "cp -r conflicting $PROFILE_DIR"
        rlRun "TmpDir=\$(mktemp -d)" 0 "Creating tmp directory"
        rlRun "cp wait_for_signal.py $TmpDir"
        rlRun "pushd $TmpDir"
        rlRun "rlFileBackup --clean $SERVICE_OVERRIDE_DIR"
        rlRun "mkdir -p $SERVICE_OVERRIDE_DIR"
        rlRun "echo -e '[Service]\\nStartLimitBurst=0' > $SERVICE_OVERRIDE_DIR/limit.conf"
        rlRun "systemctl daemon-reload"
        rlRun "rlServiceStop tuned"
        SWAPPINESS_BACKUP=$(sysctl -n $SWAPPINESS)
        DIRTY_RATIO_BACKUP=$(sysctl -n $DIRTY_RATIO)
        rlRun "rlServiceStart tuned"
    rlPhaseEnd

    rlPhaseStartTest "Check that settings from the post-loaded profile are applied"
        rlRun "tuned-adm profile parent"
        rlRun "echo post > $POST_LOADED_PROFILE"
        # Restart TuneD so that the post-loaded profile gets applied
        rlRun "rlServiceStart tuned"
        rlAssertEquals "Check that swappiness is set correctly" \
            "$(sysctl -n $SWAPPINESS)" 20
        rlAssertEquals "Check that dirty ratio is set correctly" \
            "$(sysctl -n $DIRTY_RATIO)" 8
    rlPhaseEnd

    rlPhaseStartTest "Check that the post-loaded profile name gets reloaded when HUP is received"
        rlRun "tuned-adm profile parent"
        rlRun "echo post > $POST_LOADED_PROFILE"
        rlRun "rlServiceStart tuned"
        rlRun "echo parent2 > $ACTIVE_PROFILE"
        rlRun "echo post2 > $POST_LOADED_PROFILE"
        timeout 25s $PYTHON ./wait_for_signal.py &
        pid=$!
        # Give the wait_for_signal script a chance to connect to the bus
        sleep 1
        rlRun "kill -HUP '$(< $PID_FILE)'" 0 "Send HUP to TuneD"
        rlRun "wait $pid"
        rlAssertEquals "Check that swappiness is set correctly" \
            "$(sysctl -n $SWAPPINESS)" 30
        rlAssertEquals "Check that dirty ratio is set correctly" \
            "$(sysctl -n $DIRTY_RATIO)" 7
    rlPhaseEnd

    rlPhaseStartTest "Check that 'tuned-adm profile' does not cause TuneD to touch the post-loaded profile"
        rlRun "tuned-adm profile parent"
        rlRun "echo post > $POST_LOADED_PROFILE"
        # Restart TuneD so that the post-loaded profile gets applied
        rlRun "rlServiceStart tuned"
        # Change the active profile. After this, the profile 'post' must remain applied.
        rlRun "tuned-adm profile parent2"
        rlAssertEquals "Check that swappiness is set correctly" \
            "$(sysctl -n $SWAPPINESS)" 30
        rlAssertEquals "Check that dirty ratio is set correctly" \
            "$(sysctl -n $DIRTY_RATIO)" 8
    rlPhaseEnd

    rlPhaseStartTest "Check that settings from the post-loaded profile take precedence"
        rlRun "tuned-adm profile parent"
        rlRun "echo conflicting > $POST_LOADED_PROFILE"
        # Restart TuneD so that the post-loaded profile gets applied
        rlRun "rlServiceStart tuned"
        rlAssertEquals "Check that swappiness is set correctly" \
            "$(sysctl -n $SWAPPINESS)" 10
    rlPhaseEnd

    rlPhaseStartTest "Check that conflicts in the post-loaded profile do not cause verification to fail"
        rlRun "tuned-adm profile parent"
        rlRun "echo conflicting > $POST_LOADED_PROFILE"
        # Restart TuneD so that the post-loaded profile gets applied
        rlRun "rlServiceStart tuned"
        rlRun "tuned-adm verify"
    rlPhaseEnd

    rlPhaseStartTest "Check that 'tuned-adm off' causes TuneD to clear the post-loaded profile"
        rlRun "tuned-adm profile parent"
        rlRun "echo post > $POST_LOADED_PROFILE"
        # Restart TuneD so that the post-loaded profile gets applied
        rlRun "rlServiceStart tuned"
        rlRun "tuned-adm off"
        rlAssertEquals "Check the output of tuned-adm active" \
            "$(tuned-adm active)" \
            "No current active profile."
        rlAssertEquals "Check that swappiness has not been changed" \
            "$(sysctl -n $SWAPPINESS)" "$SWAPPINESS_BACKUP"
        rlAssertEquals "Check that dirty ratio has not been changed" \
            "$(sysctl -n $DIRTY_RATIO)" "$DIRTY_RATIO_BACKUP"
    rlPhaseEnd

    rlPhaseStartTest "Check that the post-loaded profile is applied even if active_profile is empty"
        rlRun "> $ACTIVE_PROFILE"
        rlRun "echo manual > $PROFILE_MODE"
        rlRun "echo post > $POST_LOADED_PROFILE"
        # Restart TuneD so that the post-loaded profile gets applied
        rlRun "rlServiceStart tuned"
        rlAssertEquals "Check the output of tuned-adm active" \
            "$(tuned-adm active)" \
            "Current active profile: post"$'\n'"Current post-loaded profile: post"
        rlAssertEquals "Check that dirty ratio is set correctly" \
            "$(sysctl -n $DIRTY_RATIO)" 8
    rlPhaseEnd

    rlPhaseStartTest "Check that the post-loaded profile is listed among active profiles by 'tuned-adm active'"
        rlRun "tuned-adm profile parent"
        rlRun "echo post > $POST_LOADED_PROFILE"
        # Restart TuneD so that the post-loaded profile gets applied
        rlRun "rlServiceStart tuned"
        rlAssertEquals "Check the output of tuned-adm active" \
            "$(tuned-adm active | grep 'Current active profile')" \
            "Current active profile: parent post"
    rlPhaseEnd

    rlPhaseStartTest "Check that 'tuned -p <profile_name>' applies the post-loaded profile"
        rlRun "rlServiceStop tuned"
        rlRun "> $ACTIVE_PROFILE"
        rlRun "echo post > $POST_LOADED_PROFILE"
        rlRun "tuned -p parent &"
        rlRun "wait_for_tuned 15" 0 "Wait for the profile to become applied"
        rlAssertEquals "Check that swappiness is set correctly" \
            "$(sysctl -n $SWAPPINESS)" 20
        rlAssertEquals "Check that dirty ratio is set correctly" \
            "$(sysctl -n $DIRTY_RATIO)" 8
        rlAssertEquals "Check the output of tuned-adm active" \
            "$(tuned-adm active | grep 'Current active profile')" \
            "Current active profile: parent post"
        rlRun "kill '$(< $PID_FILE)'" 0 "Kill TuneD"
        rlRun "wait_for_tuned_stop 15" 0 "Wait for TuneD to exit"
    rlPhaseEnd

    rlPhaseStartTest "Check that the DBus signal 'profile_changed' contains only the active_profile"
        rlRun "rlServiceStop tuned"
        rlRun "echo parent > $ACTIVE_PROFILE"
        rlRun "echo post > $POST_LOADED_PROFILE"
        timeout 25s $PYTHON ./wait_for_signal.py > output &
        pid=$!
        # If the 'wait $pid' command below fails but everything else
        # in this phase succeeds, try adding a sleep here.
        rlRun "rlServiceStart tuned"
        rlRun "wait $pid"
        rlAssertEquals "Check the profiles listed in the signal" \
            "$(< output)" \
            "parent"
        timeout 25s $PYTHON ./wait_for_signal.py > output &
        pid=$!
        rlRun "tuned-adm profile parent"
        rlRun "wait $pid"
        rlAssertEquals "Check the profiles listed in the signal" \
            "$(< output)" \
            "parent"
    rlPhaseEnd

    rlPhaseStartTest "Check that 'tuned-adm profile' does not cause TuneD to reload the post-loaded profile name from disk"
        rlRun "tuned-adm profile parent"
        rlRun "echo post > $POST_LOADED_PROFILE"
        rlRun "rlServiceStart tuned"
        rlRun "echo post2 > $POST_LOADED_PROFILE"
        # Change the active profile. After this, the profile 'post' must remain applied.
        rlRun "tuned-adm profile parent2"
        rlAssertEquals "Check that swappiness is set correctly" \
            "$(sysctl -n $SWAPPINESS)" 30
        rlAssertEquals "Check that dirty ratio is set correctly" \
            "$(sysctl -n $DIRTY_RATIO)" 8
        rlAssertEquals "Check the output of tuned-adm active" \
            "$(tuned-adm active | grep 'Current active profile')" \
            "Current active profile: parent2 post"
    rlPhaseEnd

    rlPhaseStartTest "Check that 'tuned-adm profile' overwrites the post-loaded profile on disk"
        rlRun "tuned-adm profile parent"
        rlRun "echo post > $POST_LOADED_PROFILE"
        rlRun "rlServiceStart tuned"
        rlRun "echo post2 > $POST_LOADED_PROFILE"
        rlRun "tuned-adm profile parent"
        rlAssertEquals "Check the post-loaded profile name on disk" \
            "$(< $POST_LOADED_PROFILE)" \
            "post"
    rlPhaseEnd

    rlPhaseStartTest "Check that variables are shared among the active_profile and the post-loaded profile"
        rlRun "tuned-adm profile parent-vars"
        rlRun "echo post-vars > $POST_LOADED_PROFILE"
        # Restart TuneD so that the post-loaded profile gets applied
        rlRun "rlServiceStart tuned"
        rlAssertEquals "Check that swappiness is set correctly" \
            "$(sysctl -n $SWAPPINESS)" 12
        rlAssertEquals "Check that dirty ratio is set correctly" \
            "$(sysctl -n $DIRTY_RATIO)" 12
    rlPhaseEnd

    rlPhaseStartCleanup
        rlRun "popd"
        rlRun "rm -r $TmpDir" 0 "Removing tmp directory"
        rlRun "rlFileRestore"
        rlRun "systemctl daemon-reload"
        rlRun "sysctl $SWAPPINESS=$SWAPPINESS_BACKUP"
        rlRun "sysctl $DIRTY_RATIO=$DIRTY_RATIO_BACKUP"
        rlRun "rlServiceRestore tuned"
    rlPhaseEnd
rlJournalPrintText
rlJournalEnd

File: tuned-2.19.0.29+git.b894a3e/tests/beakerlib/bz1798183-RFE-support-post-loaded-profile/wait_for_signal.py

from dbus.mainloop.glib import DBusGMainLoop
import dbus
from gi.repository import GLib

def handler(profiles, res, err):
    print(profiles)
    loop.quit()

DBusGMainLoop(set_as_default=True)
loop = GLib.MainLoop()
bus = dbus.SystemBus()
bus.add_signal_receiver(handler, "profile_changed", "com.redhat.tuned.control",
                        "com.redhat.tuned", "/Tuned")
loop.run()

File: tuned-2.19.0.29+git.b894a3e/tests/beakerlib/error-messages/main.fmf

component: tuned
contact: Robin Hack <rhack@redhat.com>
description: |
    Bug summary: TuneD logs error message if '/sys/class/scsi_host/host*/link_power_management_policy' file does not exist.
    Bugzilla link: https://bugzilla.redhat.com/show_bug.cgi?id=1416712
duration: 5m
relevancy: |
    distro = rhel-4, rhel-5: False
summary: Test for BZ#1416712 (TuneD logs error message if)
framework: beakerlib

File: tuned-2.19.0.29+git.b894a3e/tests/beakerlib/error-messages/runtest.sh

#!/bin/bash
# vim: dict+=/usr/share/beakerlib/dictionary.vim cpt=.,w,b,u,t,i,k
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
#
#   runtest.sh of /CoreOS/tuned/Regression/bz1416712-Tuned-logs-error-message-if
#   Description: Test for BZ#1416712 (TuneD logs error message if)
#   Author: Tereza Cerna <tcerna@redhat.com>
#
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
#
#   Copyright Red Hat
#
#   SPDX-License-Identifier: GPL-2.0-or-later WITH GPL-CC-1.0
#
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

# Include Beaker environment
. /usr/share/beakerlib/beakerlib.sh || exit 1

PACKAGE="tuned"

rlJournalStart
    rlPhaseStartSetup
        rlAssertRpm $PACKAGE
        rlImport "tuned/basic"
        rlServiceStart "tuned"
        tunedProfileBackup
    rlPhaseEnd

    rlPhaseStartTest "Test of profile balanced"
        rlRun "cat /usr/lib/tuned/balanced/tuned.conf | grep alpm="
        echo > /var/log/tuned/tuned.log
        rlRun "tuned-adm profile balanced"
        rlRun "tuned-adm active | grep balanced"
        rlRun "cat /var/log/tuned/tuned.log | grep -v 'ERROR tuned.utils.commands: Reading /sys/class/scsi_host/host0/link_power_management_policy'"
        rlRun "cat /var/log/tuned/tuned.log | grep -v 'WARNING tuned.plugins.plugin_scsi_host: ALPM control file'"
    rlPhaseEnd

    rlPhaseStartTest "Test of profile powersave"
        rlRun "cat /usr/lib/tuned/powersave/tuned.conf | grep alpm="
        echo > /var/log/tuned/tuned.log
        rlRun "tuned-adm profile powersave"
        rlRun "tuned-adm active | grep powersave"
        rlRun "cat /var/log/tuned/tuned.log | grep -v 'ERROR tuned.utils.commands: Reading /sys/class/scsi_host/host0/link_power_management_policy'"
        rlRun "cat /var/log/tuned/tuned.log | grep -v 'WARNING tuned.plugins.plugin_scsi_host: ALPM control file'"
    rlPhaseEnd

    rlPhaseStartCleanup
        tunedProfileRestore
    rlPhaseEnd
rlJournalPrintText
rlJournalEnd

File: tuned-2.19.0.29+git.b894a3e/tests/beakerlib/tuned-adm-functionality/main.fmf

component: tuned
contact: Robin Hack <rhack@redhat.com>
description: |
    Basic tuned-adm functionality check.
duration: 5m
relevancy: |
    distro = rhel-6: False
    distro < rhel-7.3: False
summary: Check functionality of tuned-adm tool.
framework: beakerlib

File: tuned-2.19.0.29+git.b894a3e/tests/beakerlib/tuned-adm-functionality/runtest.sh

#!/bin/bash
# vim: dict=/usr/share/beakerlib/dictionary.vim cpt=.,w,b,u,t,i,k
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
#
#   runtest.sh of /CoreOS/tuned/Sanity/tuned-adm-functionality
#   Description: Check functionality of tuned-adm tool.
#   Author: Tereza Cerna <tcerna@redhat.com>
#
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
#
#   Copyright Red Hat
#
#   SPDX-License-Identifier: GPL-2.0-or-later WITH GPL-CC-1.0
#
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

# Include Beaker environment
. /usr/share/beakerlib/beakerlib.sh || exit 1

PACKAGE="tuned"
LOG_FILE="profile.log"

rlJournalStart
    rlPhaseStartSetup
        rlAssertRpm $PACKAGE
        rlImport "tuned/basic"
        tunedProfileBackup
    rlPhaseEnd

    rlPhaseStartTest "Test tuned-adm LIST"
        rlServiceStart "tuned"
        rlRun "tuned-adm list" 0
        rlServiceStop "tuned"
        rlRun "tuned-adm list" 0
    rlPhaseEnd

    rlPhaseStartTest "Test tuned-adm ACTIVE"
        rlServiceStart "tuned"
        rlRun "tuned-adm active" 0
        rlServiceStop "tuned"
        rlRun "tuned-adm active" 0
    rlPhaseEnd

    rlPhaseStartTest "Test tuned-adm OFF"
        rlServiceStart "tuned"
        rlRun "tuned-adm off" 0
        rlServiceStop "tuned"
        rlRun "tuned-adm off" 1
    rlPhaseEnd

    rlPhaseStartTest "Test tuned-adm PROFILE"
        rlServiceStart "tuned"
        rlRun "tuned-adm profile virtual-guest" 0
        sleep 5
        rlServiceStop "tuned"
        rlRun "tuned-adm profile virtual-host" 0
        sleep 5
    rlPhaseEnd

    rlPhaseStartTest "Test tuned-adm PROFILE_INFO"
        rlServiceStart "tuned"
        rlRun "tuned-adm profile_info" 0
        rlServiceStop "tuned"
        rlRun "tuned-adm profile_info" 0
    rlPhaseEnd

    rlPhaseStartTest "Test tuned-adm RECOMMEND"
        rlServiceStart "tuned"
        rlRun "tuned-adm recommend" 0
        rlServiceStop "tuned"
        rlRun "tuned-adm recommend" 0
    rlPhaseEnd

    rlPhaseStartTest "Test tuned-adm VERIFY"
        echo > /var/log/tuned/tuned.log
        rlServiceStart "tuned"
        rlRun "tuned-adm verify --ignore-missing" 0
        rlRun "cat /var/log/tuned/tuned.log | grep ERROR" 1
        rlServiceStop "tuned"
        rlRun "tuned-adm verify --ignore-missing" 1
        rlRun "cat /var/log/tuned/tuned.log | grep ERROR" 1
    rlPhaseEnd

    rlPhaseStartCleanup
        tunedProfileRestore
        rlServiceRestore "tuned"
    rlPhaseEnd
rlJournalPrintText
rlJournalEnd

File: tuned-2.19.0.29+git.b894a3e/tests/beakerlib/variables-support-in-profiles/main.fmf

summary: variables support in profiles
description: ''
contact: rhack@redhat.com
component:
- tuned
test: ./runtest.sh
framework: beakerlib
require:
- library(tuned/basic)
recommend:
- tuned
duration: 5m
enabled: true
tag:
- FedoraReady
- NoRHEL4
- NoRHEL5
- TIPfail_infra
- TIPpass
- Tier1
tier: '1'
link:
- relates: https://bugzilla.redhat.com/show_bug.cgi?id=1225124
adjust:
- enabled: false
  when: distro < rhel-7
  continue: false
extra-nitrate: TC#0496575
extra-summary: /CoreOS/tuned/Sanity/variables-support-in-profiles
extra-task: /CoreOS/tuned/Sanity/variables-support-in-profiles

File: tuned-2.19.0.29+git.b894a3e/tests/beakerlib/variables-support-in-profiles/runtest.sh

#!/bin/bash
# vim: dict+=/usr/share/beakerlib/dictionary.vim cpt=.,w,b,u,t,i,k
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
#
#   runtest.sh of /CoreOS/tuned/Sanity/variables-support-in-profiles
#   Description: variables support in profiles
#   Author: Branislav Blaskovic <bblaskov@redhat.com>
#
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
#
#   Copyright Red Hat
#
#   SPDX-License-Identifier: GPL-2.0-or-later WITH GPL-CC-1.0
#
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

# Include Beaker environment
. /usr/share/beakerlib/beakerlib.sh || exit 1

PACKAGE="tuned"

rlJournalStart
    rlPhaseStartSetup
        rlAssertRpm $PACKAGE
        rlFileBackup --clean /etc/systemd/system.conf.d
        rlRun "mkdir -p /etc/systemd/system.conf.d"
        rlRun "echo -e '[Manager]\nDefaultStartLimitInterval=0' > /etc/systemd/system.conf.d/tuned.conf" 0 "Disable systemd rate limiting"
        rlRun "systemctl daemon-reload"
        rlImport "tuned/basic"
        rlRun "TmpDir=\$(mktemp -d)" 0 "Creating tmp directory"
        rlRun "pushd $TmpDir"
        rlServiceStart "tuned"
        tunedProfileBackup
        rlFileBackup "/usr/lib/tuned/balanced/tuned.conf"
        echo "
[variables]
SWAPPINESS1 = 70
SWAPPINESS2 = \${SWAPPINESS1}
[sysctl]
vm.swappiness = \${SWAPPINESS2}
" >> /usr/lib/tuned/balanced/tuned.conf
        rlRun "cat /usr/lib/tuned/balanced/tuned.conf"
        OLD_SWAPPINESS=$(sysctl -n vm.swappiness)
    rlPhaseEnd

    rlPhaseStartTest
        rlRun "tuned-adm profile balanced"
        rlRun -s "sysctl -n vm.swappiness"
        rlAssertGrep "70" "$rlRun_LOG"
    rlPhaseEnd

    rlPhaseStartCleanup
        rlRun "sysctl vm.swappiness=$OLD_SWAPPINESS"
        rlFileRestore
        tunedProfileRestore
        rlServiceRestore "tuned"
        rlRun "systemctl daemon-reload"
        rlRun "popd"
        rlRun "rm -r $TmpDir" 0 "Removing tmp directory"
    rlPhaseEnd
rlJournalPrintText
rlJournalEnd

File: tuned-2.19.0.29+git.b894a3e/tests/main.fmf

test: ./runtest.sh
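Several of the scripts above poll for a condition with a timeout, such as the `wait_for_tuned` and `wait_for_tuned_stop` helpers that wait for the `com.redhat.tuned` D-Bus name to appear or the PID file to disappear. The same retry pattern can be sketched in Python for illustration; `wait_for` and its predicate argument here are hypothetical stand-ins, not part of TuneD or its test suite:

```python
import time

# Hypothetical generic form of the polling loop used by
# wait_for_tuned()/wait_for_tuned_stop() in the runtest.sh above:
# check a predicate once per interval until it holds or the
# timeout expires.
def wait_for(predicate, timeout, interval=1.0):
    deadline = time.monotonic() + timeout
    while not predicate():
        if time.monotonic() >= deadline:
            return False          # timed out
        time.sleep(interval)
    return True                   # condition became true in time

# Usage: a predicate that is already true returns immediately.
print(wait_for(lambda: True, timeout=5))  # True
```

In the shell originals, the real predicate is a `python -c` D-Bus name check or a `test -f "$PID_FILE"`; the one-second `sleep` between attempts matches their `sleep 1`.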
File: tuned-2.19.0.29+git.b894a3e/tests/unit/__init__.py (empty)

File: tuned-2.19.0.29+git.b894a3e/tests/unit/exports/__init__.py

import globals

File: tuned-2.19.0.29+git.b894a3e/tests/unit/exports/test_controller.py

import unittest
try:
    from unittest.mock import Mock
except ImportError:
    from mock import Mock

from tuned.exports.controller import ExportsController
import tuned.exports as exports

class ControllerTestCase(unittest.TestCase):
    @classmethod
    def setUpClass(cls):
        cls._controller = ExportsController()

    def test_is_exportable_method(self):
        self.assertFalse(self._controller._is_exportable_method(
            MockClass().NonExportableObject))
        self.assertTrue(self._controller._is_exportable_method(
            MockClass().ExportableMethod))

    def test_is_exportable_signal(self):
        self.assertFalse(self._controller._is_exportable_signal(
            MockClass().NonExportableObject))
        self.assertTrue(self._controller._is_exportable_signal(
            MockClass().ExportableSignal))

    def test_initialize_exports(self):
        local_controller = ExportsController()
        exporter = MockExporter()
        instance = MockClass()
        local_controller.register_exporter(exporter)
        local_controller.register_object(instance)
        local_controller._initialize_exports()
        self.assertEqual(exporter.exported_methods[0].method,
            instance.ExportableMethod)
        self.assertEqual(exporter.exported_methods[0].args[0],
            "method_param1")
        self.assertEqual(exporter.exported_methods[0].kwargs['kword'],
            "method_param2")
        self.assertEqual(exporter.exported_signals[0].method,
            instance.ExportableSignal)
        self.assertEqual(exporter.exported_signals[0].args[0],
            "signal_param1")
        self.assertEqual(exporter.exported_signals[0].kwargs['kword'],
            "signal_param2")

    def test_start_stop(self):
        local_controller = ExportsController()
        exporter = MockExporter()
        local_controller.register_exporter(exporter)
        local_controller.start()
        self.assertTrue(exporter.is_running)
        local_controller.stop()
        self.assertFalse(exporter.is_running)

class MockExporter(object):
    def __init__(self):
        self.exported_methods = []
        self.exported_signals = []
        self.is_running = False

    def export(self, method, *args, **kwargs):
        object_to_export = Mock(method=method, args=args, kwargs=kwargs)
        self.exported_methods.append(object_to_export)

    def signal(self, method, *args, **kwargs):
        object_to_export = Mock(method=method, args=args, kwargs=kwargs)
        self.exported_signals.append(object_to_export)

    def start(self):
        self.is_running = True

    def stop(self):
        self.is_running = False

class MockClass(object):
    @exports.export('method_param1', kword='method_param2')
    def ExportableMethod(self):
        return True

    def NonExportableObject(self):
        pass

    @exports.signal('signal_param1', kword='signal_param2')
    def ExportableSignal(self):
        return True

File: tuned-2.19.0.29+git.b894a3e/tests/unit/func_basic.sh

#!/bin/bash
systemctl restart tuned
tuned-adm recommend
PROFILES=`tuned-adm list | sed -n '/^\-/ s/^- // p'`
for p in $PROFILES
do
    tuned-adm profile "$p"
    sleep 5
    tuned-adm active
done
tuned-adm profile `tuned-adm recommend`

File: tuned-2.19.0.29+git.b894a3e/tests/unit/globals.py

import logging
import tuned.logs

logger = logging.getLogger()
handler = logging.NullHandler()
logger.addHandler(handler)
tuned.logs.get = lambda: logger

File: tuned-2.19.0.29+git.b894a3e/tests/unit/hardware/__init__.py

import globals

File: tuned-2.19.0.29+git.b894a3e/tests/unit/hardware/test_device_matcher.py

import unittest

from tuned.hardware.device_matcher import DeviceMatcher

class DeviceMatcherTestCase(unittest.TestCase):
    @classmethod
    def setUpClass(cls):
        cls.matcher = DeviceMatcher()

    def test_one_positive_rule(self):
        self.assertTrue(self.matcher.match("sd*", "sda"))
        self.assertFalse(self.matcher.match("sd*", "hda"))

    def test_multiple_positive_rules(self):
        self.assertTrue(self.matcher.match("sd* hd*", "sda"))
        self.assertTrue(self.matcher.match("sd* hd*", "hda"))
        self.assertFalse(self.matcher.match("sd* hd*", "dm-0"))

    def test_implicit_positive(self):
        self.assertTrue(self.matcher.match("", "sda"))
        self.assertTrue(self.matcher.match("!sd*", "hda"))
        self.assertFalse(self.matcher.match("!sd*", "sda"))

    def test_positve_negative_combination(self):
        self.assertTrue(self.matcher.match("sd* !sdb", "sda"))
        self.assertFalse(self.matcher.match("sd* !sdb", "sdb"))

    def test_positive_first(self):
        self.assertTrue(self.matcher.match("!sdb sd*", "sda"))
        self.assertFalse(self.matcher.match("!sdb sd*", "sdb"))

    def test_match_list(self):
        devices = ["sda", "sdb", "sdc"]
        self.assertListEqual(self.matcher.match_list("sd* !sdb", devices),
            ["sda", "sdc"])
        self.assertListEqual(self.matcher.match_list("!sda", devices),
            ["sdb", "sdc"])

File: tuned-2.19.0.29+git.b894a3e/tests/unit/hardware/test_device_matcher_udev.py

import unittest

import pyudev

from tuned.hardware.device_matcher_udev import DeviceMatcherUdev

class DeviceMatcherUdevTestCase(unittest.TestCase):
    @classmethod
    def setUpClass(cls):
        cls.udev_context = pyudev.Context()
        cls.matcher = DeviceMatcherUdev()

    def test_simple_search(self):
        try:
            device = pyudev.Devices.from_sys_path(self.udev_context,
                "/sys/devices/virtual/tty/tty0")
        except AttributeError:
            device = pyudev.Device.from_sys_path(self.udev_context,
                "/sys/devices/virtual/tty/tty0")
        self.assertTrue(self.matcher.match("tty0", device))
        try:
            device = pyudev.Devices.from_sys_path(self.udev_context,
                "/sys/devices/virtual/tty/tty1")
        except AttributeError:
            device = pyudev.Device.from_sys_path(self.udev_context,
                "/sys/devices/virtual/tty/tty1")
        self.assertFalse(self.matcher.match("tty0", device))

    def test_regex_search(self):
        try:
            device = pyudev.Devices.from_sys_path(self.udev_context,
                "/sys/devices/virtual/tty/tty0")
        except AttributeError:
            device = pyudev.Device.from_sys_path(self.udev_context,
                "/sys/devices/virtual/tty/tty0")
        self.assertTrue(self.matcher.match("tty.", device))
        self.assertFalse(self.matcher.match("tty[1-9]", device))

File: tuned-2.19.0.29+git.b894a3e/tests/unit/hardware/test_inventory.py

import unittest
try:
    from unittest.mock import Mock
except ImportError:
    from mock import Mock

import pyudev

from tuned.hardware.inventory import Inventory

subsystem_name = "test subsystem"

class InventoryTestCase(unittest.TestCase):
    @classmethod
    def setUpClass(cls):
        cls._context = pyudev.Context()
        cls._inventory = Inventory(set_receive_buffer_size=False)
        cls._dummy = DummyPlugin()
        cls._dummier = DummyPlugin()

    def test_get_device(self):
        try:
            device1 = pyudev.Devices.from_name(self._context, "tty", "tty0")
        except AttributeError:
            device1 = pyudev.Device.from_name(self._context, "tty", "tty0")
        device2 = self._inventory.get_device("tty", "tty0")
        self.assertEqual(device1, device2)

    def test_get_devices(self):
        device_list1 = self._context.list_devices(subsystem="tty")
        device_list2 = self._inventory.get_devices("tty")
        try:
            self.assertCountEqual(device_list1, device_list2)
        except AttributeError:
            # Python 2
            self.assertItemsEqual(device_list1, device_list2)

    def test_subscribe(self):
        self._inventory.subscribe(self._dummy, subsystem_name,
            self._dummy.TestCallback)
        self._inventory.subscribe(self._dummier, subsystem_name,
            self._dummier.TestCallback)
        device = Mock(subsystem=subsystem_name)
        self._inventory._handle_udev_event("test event", device)
        self.assertTrue(self._dummy.CallbackWasCalled)
        self.assertTrue(self._dummier.CallbackWasCalled)

    def test_unsubscribe(self):
        self._dummy.CallbackWasCalled = False
        self._dummier.CallbackWasCalled = False
        self._inventory.unsubscribe(self._dummy)
        device = Mock(subsystem=subsystem_name)
        self._inventory._handle_udev_event("test event", device)
        self.assertFalse(self._dummy.CallbackWasCalled)
        self.assertTrue(self._dummier.CallbackWasCalled)
        self._dummier.CallbackWasCalled = False
        self._inventory.unsubscribe(self._dummier)
        self._inventory._handle_udev_event("test event", device)
        self.assertFalse(self._dummy.CallbackWasCalled)
        self.assertFalse(self._dummier.CallbackWasCalled)
        self.assertIsNone(self._inventory._monitor_observer)

class DummyPlugin():
    def __init__(self):
        self.CallbackWasCalled = False

    def TestCallback(self, event, device):
        self.CallbackWasCalled = True

File: tuned-2.19.0.29+git.b894a3e/tests/unit/monitors/__init__.py

import globals

File: tuned-2.19.0.29+git.b894a3e/tests/unit/monitors/test_base.py

import unittest

import tuned.monitors.base

class MockMonitor(tuned.monitors.base.Monitor):
    @classmethod
    def _init_available_devices(cls):
        cls._available_devices = set(["a", "b"])

    @classmethod
    def update(cls):
        for device in ["a", "b"]:
            cls._load.setdefault(device, 0)
            cls._load[device] += 1

class MonitorBaseClassTestCase(unittest.TestCase):
    def test_fail_base_class_init(self):
        with self.assertRaises(NotImplementedError):
            tuned.monitors.base.Monitor()

    def test_update_fail_with_base_class(self):
        with self.assertRaises(NotImplementedError):
            tuned.monitors.base.Monitor.update()

    def test_available_devices(self):
        monitor = MockMonitor()
        devices = MockMonitor.get_available_devices()
        self.assertEqual(devices, set(["a", "b"]))
        monitor.cleanup()

    def test_registering_instances(self):
        monitor = MockMonitor()
        self.assertIn(monitor, MockMonitor.instances())
        monitor.cleanup()
        self.assertNotIn(monitor, MockMonitor.instances())

    def test_init_with_devices(self):
        monitor = MockMonitor()
        self.assertSetEqual(set(["a", "b"]), monitor.devices)
        monitor.cleanup()
        monitor = MockMonitor(["a"])
        self.assertSetEqual(set(["a"]), monitor.devices)
        monitor.cleanup()
        monitor = MockMonitor([])
        self.assertSetEqual(set(), monitor.devices)
        monitor.cleanup()
        monitor = MockMonitor(["b", "x"])
        self.assertSetEqual(set(["b"]), monitor.devices)
        monitor.cleanup()

    def test_add_device(self):
        monitor = MockMonitor(["a"])
        self.assertSetEqual(set(["a"]), monitor.devices)
        monitor.add_device("x")
        self.assertSetEqual(set(["a"]), monitor.devices)
        monitor.add_device("b")
        self.assertSetEqual(set(["a", "b"]), monitor.devices)
        monitor.cleanup()

    def test_remove_device(self):
        monitor = MockMonitor()
        self.assertSetEqual(set(["a", "b"]), monitor.devices)
        monitor.remove_device("a")
        self.assertSetEqual(set(["b"]), monitor.devices)
        monitor.remove_device("x")
        self.assertSetEqual(set(["b"]), monitor.devices)
        monitor.remove_device("b")
        self.assertSetEqual(set(), monitor.devices)
        monitor.cleanup()

    def test_get_load_from_enabled(self):
        monitor = MockMonitor()
        load = monitor.get_load()
		self.assertIn("a", load)
		self.assertIn("b", load)
		monitor.remove_device("a")
		load = monitor.get_load()
		self.assertNotIn("a", load)
		self.assertIn("b", load)
		monitor.remove_device("b")
		load = monitor.get_load()
		self.assertDictEqual({}, load)
		monitor.cleanup()

	def test_refresh_of_updating_devices(self):
		monitor1 = MockMonitor(["a"])
		self.assertSetEqual(set(["a"]), MockMonitor._updating_devices)
		monitor2 = MockMonitor(["a", "b"])
		self.assertSetEqual(set(["a", "b"]), MockMonitor._updating_devices)
		monitor1.cleanup()
		self.assertSetEqual(set(["a", "b"]), MockMonitor._updating_devices)
		monitor2.cleanup()
		self.assertSetEqual(set(), MockMonitor._updating_devices)
070701000000F7000041ED0000000000000000000000026391BC3A00000000000000000000000000000000000000000000002F00000000tuned-2.19.0.29+git.b894a3e/tests/unit/plugins
070701000000F8000081A40000000000000000000000016391BC3A0000000F000000000000000000000000000000000000003B00000000tuned-2.19.0.29+git.b894a3e/tests/unit/plugins/__init__.py
import globals
070701000000F9000081A40000000000000000000000016391BC3A00001E79000000000000000000000000000000000000003C00000000tuned-2.19.0.29+git.b894a3e/tests/unit/plugins/test_base.py
try:
	from collections.abc import Mapping
except ImportError:
	from collections import Mapping
import tempfile
import unittest
from tuned.monitors.repository import Repository
import tuned.plugins.decorators as decorators
from tuned.plugins.base import Plugin
import tuned.hardware as hardware
import tuned.monitors as monitors
import tuned.profiles as profiles
import tuned.plugins as plugins
import tuned.consts as consts
from tuned import storage
import tuned.plugins.base

temp_storage_file = tempfile.TemporaryFile(mode = 'r')
consts.DEFAULT_STORAGE_FILE = temp_storage_file.name

monitors_repository = monitors.Repository()
hardware_inventory = hardware.Inventory(set_receive_buffer_size=False)
device_matcher = hardware.DeviceMatcher()
device_matcher_udev = hardware.DeviceMatcherUdev()
plugin_instance_factory = plugins.instance.Factory()
storage_provider = storage.PickleProvider()
storage_factory = storage.Factory(storage_provider)

class PluginBaseTestCase(unittest.TestCase):
	def setUp(self):
		self._plugin = DummyPlugin(monitors_repository,storage_factory,\
			hardware_inventory,device_matcher,device_matcher_udev,\
			plugin_instance_factory,None,None)
		self._commands_plugin = CommandsPlugin(monitors_repository,\
			storage_factory,hardware_inventory,device_matcher,\
			device_matcher_udev,plugin_instance_factory,None,\
			profiles.variables.Variables())

	def test_get_effective_options(self):
		self.assertEqual(self._plugin._get_effective_options(\
			{'default_option1':'default_value2'}),\
			{'default_option1': 'default_value2',\
			'default_option2': 'default_value2'})

	def test_option_bool(self):
		self.assertTrue(self._plugin._option_bool(True))
		self.assertTrue(self._plugin._option_bool('true'))
		self.assertFalse(self._plugin._option_bool('false'))

	def test_create_instance(self):
		instance = self._plugin.create_instance(\
			'first_instance','test','test','test','test',\
			{'default_option1':'default_value2'})
		self.assertIsNotNone(instance)

	def test_destroy_instance(self):
		instance = self._plugin.create_instance(\
			'first_instance','test','test','test','test',\
			{'default_option1':'default_value2'})
		instance.plugin.init_devices()
		self._plugin.destroy_instance(instance)
		self.assertIn(instance,self._plugin.cleaned_instances)

	def test_get_matching_devices(self):
		""" without udev regex """
		instance = self._plugin.create_instance(\
			'first_instance','right_device*',None,'test','test',\
			{'default_option1':'default_value2'})
		self.assertEqual(self._plugin._get_matching_devices(\
			instance,['bad_device','right_device1','right_device2']),\
			set(['right_device1','right_device2']))
		""" with udev regex """
		instance = self._plugin.create_instance(\
			'second_instance','right_device*','device[1-2]','test','test',\
			{'default_option1':'default_value2'})
		device1 = DummyDevice('device1',{'name':'device1'})
		device2 = DummyDevice('device2',{'name':'device2'})
		device3 = DummyDevice('device3',{'name':'device3'})
		self.assertEqual(self._plugin._get_matching_devices(\
			instance,[device1,device2,device3]),set(['device1','device2']))

	def test_autoregister_commands(self):
		self._commands_plugin._autoregister_commands()
		self.assertEqual(self._commands_plugin._commands['size']['set'],\
			self._commands_plugin._set_size)
		self.assertEqual(self._commands_plugin._commands['size']['get'],\
			self._commands_plugin._get_size)
		self.assertEqual(\
			self._commands_plugin._commands['custom_name']['custom'],
			self._commands_plugin.the_most_custom_command)

	def test_check_commands(self):
		self._commands_plugin._check_commands()
		with self.assertRaises(TypeError):
			bad_plugin = BadCommandsPlugin(monitors_repository,storage_factory,\
				hardware_inventory,device_matcher,device_matcher_udev,\
				plugin_instance_factory,None,None)

	def test_execute_all_non_device_commands(self):
		instance = self._commands_plugin.create_instance('test_instance','',\
			'','','',{'size':'XXL'})
		self._commands_plugin._execute_all_non_device_commands(instance)
		self.assertEqual(self._commands_plugin._size,'XXL')

	def test_execute_all_device_commands(self):
		instance = self._commands_plugin.create_instance('test_instance','',\
			'','','',{'device_setting':'010'})
		device1 = DummyDevice('device1',{})
		device2 = DummyDevice('device2',{})
		self._commands_plugin._execute_all_device_commands(instance,\
			[device1,device2])
		self.assertEqual(device1.setting,'010')
		self.assertEqual(device2.setting,'010')

	def test_process_assignment_modifiers(self):
		self.assertEqual(self._plugin._process_assignment_modifiers('100',None)\
			,'100')
		self.assertEqual(self._plugin._process_assignment_modifiers(\
			'>100','200'),None)
		self.assertEqual(self._plugin._process_assignment_modifiers(\
			'<100','200'),'100')

	def test_get_current_value(self):
		instance = self._commands_plugin.create_instance('test_instance','',\
			'','','',{})
		command = [com for com in self._commands_plugin._commands.values()\
			if com['name'] == 'size'][0]
		self.assertEqual(self._commands_plugin._get_current_value(command),'S')

	def test_norm_value(self):
		self.assertEqual(self._plugin._norm_value('"000000021"'),'21')

	def test_verify_value(self):
		self.assertEqual(self._plugin._verify_value(\
			'test_value','1',None,True),True)
		self.assertEqual(self._plugin._verify_value(\
			'test_value','1',None,False),False)
		self.assertEqual(self._plugin._verify_value(\
			'test_value','00001','001',False),True)
		self.assertEqual(self._plugin._verify_value(\
			'test_value','0x1a','0x1a',False),True)
		self.assertEqual(self._plugin._verify_value(\
			'test_value','0x1a','0x1b',False),False)

	@classmethod
	def tearDownClass(cls):
		temp_storage_file.close()

class DummyPlugin(Plugin):
	def __init__(self,*args,**kwargs):
		super(DummyPlugin,self).__init__(*args,**kwargs)
		self.cleaned_instances = []

	@classmethod
	def _get_config_options(self):
		return {'default_option1':'default_value1',\
			'default_option2':'default_value2'}

	def _instance_cleanup(self, instance):
		self.cleaned_instances.append(instance)

	def _get_device_objects(self, devices):
		objects = []
		for device in devices:
			objects.append({'name':device})
		return devices

class DummyDevice(Mapping):
	def __init__(self,sysname,dictionary,*args,**kwargs):
		super(DummyDevice,self).__init__(*args,**kwargs)
		self.dictionary = dictionary
		self.properties = dictionary
		self.sys_name = sysname
		self.setting = '101'

	def __getitem__(self,prop):
		return self.dictionary.__getitem__(prop)

	def __len__(self):
		return self.dictionary.__len__()

	def __iter__(self):
		return self.dictionary.__iter__()

class CommandsPlugin(Plugin):
	def __init__(self,*args,**kwargs):
		super(CommandsPlugin,self).__init__(*args,**kwargs)
		self._size = 'S'

	@classmethod
	def _get_config_options(self):
		"""Default configuration options for the plugin."""
		return {'size':'S','device_setting':'101'}

	@decorators.command_set('size')
	def _set_size(self, new_size, sim):
		self._size = new_size
		return new_size
	@decorators.command_get('size')
	def _get_size(self):
		return self._size

	@decorators.command_set('device_setting',per_device = True)
	def _set_device_setting(self,value,device,sim):
		device.setting = value
		return device.setting

	@decorators.command_get('device_setting')
	def _get_device_setting(self,device,ignore_missing = False):
		return device.setting

	@decorators.command_custom('custom_name')
	def the_most_custom_command(self):
		return True

class BadCommandsPlugin(Plugin):
	def __init__(self,*args,**kwargs):
		super(BadCommandsPlugin,self).__init__(*args,**kwargs)
		self._size = 'S'

	@decorators.command_set('size')
	def _set_size(self, new_size):
		self._size = new_size
070701000000FA000041ED0000000000000000000000026391BC3A00000000000000000000000000000000000000000000003000000000tuned-2.19.0.29+git.b894a3e/tests/unit/profiles
070701000000FB000081A40000000000000000000000016391BC3A0000000F000000000000000000000000000000000000003C00000000tuned-2.19.0.29+git.b894a3e/tests/unit/profiles/__init__.py
import globals
070701000000FC000081A40000000000000000000000016391BC3A00000FCB000000000000000000000000000000000000003F00000000tuned-2.19.0.29+git.b894a3e/tests/unit/profiles/test_loader.py
import unittest
import tempfile
import shutil
import os
import tuned.profiles as profiles
from tuned.profiles.exceptions import InvalidProfileException

class LoaderTestCase(unittest.TestCase):
	@classmethod
	def setUpClass(cls):
		cls._test_dir = tempfile.mkdtemp()
		cls._profiles_dir = cls._test_dir + '/test_profiles'
		cls._dummy_profile_dir = cls._profiles_dir + '/dummy'
		cls._dummy_profile_dir2 = cls._profiles_dir + '/dummy2'
		cls._dummy_profile_dir3 = cls._profiles_dir + '/dummy3'
		cls._dummy_profile_dir4 = cls._profiles_dir + '/dummy4'
		try:
			os.mkdir(cls._profiles_dir)
			os.mkdir(cls._dummy_profile_dir)
			os.mkdir(cls._dummy_profile_dir2)
			os.mkdir(cls._dummy_profile_dir3)
			os.mkdir(cls._dummy_profile_dir4)
		except OSError:
			pass
		with open(cls._dummy_profile_dir + '/tuned.conf','w') as f:
			f.write('[main]\nsummary=dummy profile\n')
			f.write('[test_unit]\ntest_option=hello\n')
			f.write('random_option=random\n')
		with open(cls._dummy_profile_dir2 + '/tuned.conf','w') as f:
			f.write(\
				'[main]\nsummary=second dummy profile\n')
			f.write('[test_unit]\ntest_option=hello world\n')
			f.write('secondary_option=whatever\n')
		with open(cls._dummy_profile_dir3 + '/tuned.conf','w') as f:
			f.write('[main]\nsummary=another profile\ninclude=dummy\n')
			f.write('[test_unit]\ntest_option=bye bye\n')
			f.write('new_option=add this\n')
		with open(cls._dummy_profile_dir4 + '/tuned.conf','w') as f:
			f.write(\
				'[main]\nsummary=dummy profile for configuration read test\n')
			f.write('file_path=${i:PROFILE_DIR}/whatever\n')
			f.write('script=random_name.sh\n')
			f.write('[test_unit]\ntest_option=hello world\n')
			f.write('devices=/dev/${variable1},/dev/${variable2}\n')
			f.write('[variables]\nvariable1=net\nvariable2=cpu')

	def setUp(self):
		locator = profiles.Locator([self._profiles_dir])
		factory = profiles.Factory()
		merger = profiles.Merger()
		self._loader = profiles.Loader(locator,factory,merger,None,\
			profiles.variables.Variables())

	def test_safe_name(self):
		self.assertFalse(self._loader.safe_name('*'))
		self.assertFalse(self._loader.safe_name('$'))
		self.assertTrue(self._loader.safe_name('Allowed_ch4rs.-'))

	def test_load_without_include(self):
		merged_profile = self._loader.load(['dummy','dummy2'])
		self.assertEqual(merged_profile.name, 'dummy dummy2')
		self.assertEqual(merged_profile.options['summary'],\
			'second dummy profile')
		self.assertEqual(merged_profile.units['test_unit'].\
			options['test_option'],'hello world')
		self.assertEqual(merged_profile.units['test_unit'].\
			options['secondary_option'],'whatever')
		with self.assertRaises(InvalidProfileException):
			self._loader.load([])
		with self.assertRaises(InvalidProfileException):
			self._loader.load(['invalid'])

	def test_load_with_include(self):
		merged_profile = self._loader.load(['dummy3'])
		self.assertEqual(merged_profile.name,'dummy3')
		self.assertEqual(merged_profile.options['summary'],'another profile')
		self.assertEqual(merged_profile.units['test_unit'].\
			options['test_option'],'bye bye')
		self.assertEqual(merged_profile.units['test_unit'].\
			options['new_option'],'add this')
		self.assertEqual(merged_profile.units['test_unit'].\
			options['random_option'],'random')

	def test_expand_profile_dir(self):
		self.assertEqual(self._loader._expand_profile_dir(\
			'/hello/world','${i:PROFILE_DIR}/file'),'/hello/world/file')

	def test_load_config_data(self):
		config = self._loader._load_config_data(\
			self._dummy_profile_dir4 + '/tuned.conf')
		self.assertEqual(config['main']['script'][0],\
			self._dummy_profile_dir4 + '/random_name.sh')
		self.assertEqual(config['main']['file_path'],\
			self._dummy_profile_dir4 + '/whatever')
		self.assertEqual(config['test_unit']['test_option'],\
			'hello world')

	def test_variables(self):
		config = self._loader.load(['dummy4'])
		self.assertEqual(config.units['test_unit'].devices,\
			'/dev/net,/dev/cpu')

	@classmethod
	def tearDownClass(cls):
		shutil.rmtree(cls._test_dir)
070701000000FD000081A40000000000000000000000016391BC3A00000C4A000000000000000000000000000000000000004000000000tuned-2.19.0.29+git.b894a3e/tests/unit/profiles/test_locator.py
import unittest
import os
import shutil
import tempfile
from tuned.profiles.locator import Locator

class LocatorTestCase(unittest.TestCase):
	def setUp(self):
		self.locator = Locator(self._tmp_load_dirs)

	@classmethod
	def setUpClass(cls):
		tmpdir1 = tempfile.mkdtemp()
		tmpdir2 = tempfile.mkdtemp()
		cls._tmp_load_dirs = [tmpdir1, tmpdir2]
		cls._create_profile(tmpdir1, "balanced")
		cls._create_profile(tmpdir1, "powersafe")
		cls._create_profile(tmpdir2, "custom")
		cls._create_profile(tmpdir2, "balanced")

	@classmethod
	def tearDownClass(cls):
		for tmp_dir in cls._tmp_load_dirs:
			shutil.rmtree(tmp_dir, True)

	@classmethod
	def _create_profile(cls, load_dir, profile_name):
		profile_dir = os.path.join(load_dir, profile_name)
		conf_name = os.path.join(profile_dir, "tuned.conf")
		os.mkdir(profile_dir)
		with open(conf_name, "w") as conf_file:
			if profile_name != "custom":
				conf_file.write("[main]\nsummary=this is " + profile_name + "\n")
			else:
				conf_file.write("summary=this is " + profile_name + "\n")

	def test_init(self):
		Locator([])

	def test_init_invalid_type(self):
		with self.assertRaises(TypeError):
			Locator("string")

	def test_get_known_names(self):
		known = self.locator.get_known_names()
		self.assertListEqual(known, ["balanced", "custom", "powersafe"])

	def test_get_config(self):
		config_name = self.locator.get_config("custom")
		self.assertEqual(config_name, os.path.join(self._tmp_load_dirs[1], "custom", "tuned.conf"))
		# none matched, none skipped
		config_name = self.locator.get_config("non-existent")
		self.assertIsNone(config_name)

	def test_get_config_priority(self):
		customized = self.locator.get_config("balanced")
		self.assertEqual(customized, os.path.join(self._tmp_load_dirs[1], "balanced", "tuned.conf"))
		system = self.locator.get_config("balanced", [customized])
		self.assertEqual(system, os.path.join(self._tmp_load_dirs[0], "balanced", "tuned.conf"))
		# none matched, but at least one skipped
		empty = self.locator.get_config("balanced", [customized, system])
		self.assertEqual(empty, "")

	def test_ignore_nonexistent_dirs(self):
		locator = Locator([self._tmp_load_dirs[0], "/tmp/some-dir-which-does-not-exist-for-sure"])
		balanced = locator.get_config("balanced")
		self.assertEqual(balanced, os.path.join(self._tmp_load_dirs[0], "balanced", "tuned.conf"))
		known = locator.get_known_names()
		self.assertListEqual(known, ["balanced", "powersafe"])

	def test_get_known_names_summary(self):
		self.assertEqual(("balanced", "this is balanced"), sorted(self.locator.get_known_names_summary())[0])

	def test_get_profile_attrs(self):
		attrs = self.locator.get_profile_attrs("balanced", ["summary", "wrong_attr"], ["this is default", "this is wrong attr"])
		self.assertEqual([True, "balanced", "this is balanced", "this is wrong attr"], attrs)
		attrs = self.locator.get_profile_attrs("custom", ["summary"], ["wrongly writen profile"])
		self.assertEqual([True, "custom", "wrongly writen profile"], attrs)
		attrs = self.locator.get_profile_attrs("different", ["summary"], ["non existing profile"])
		self.assertEqual([False, "", "", ""], attrs)
070701000000FE000081A40000000000000000000000016391BC3A0000093E000000000000000000000000000000000000003F00000000tuned-2.19.0.29+git.b894a3e/tests/unit/profiles/test_merger.py
import unittest
from tuned.profiles.merger import Merger
from tuned.profiles.profile import Profile
from collections import OrderedDict

class MergerTestCase(unittest.TestCase):
	def test_merge_without_replace(self):
		merger = Merger()
		config1 = OrderedDict([
			("main", {"test_option" : "test_value1"}),
			("net", { "devices": "em0", "custom": "custom_value"}),
		])
		profile1 = Profile('test_profile1',config1)
		config2 = OrderedDict([
			('main', {'test_option' : 'test_value2'}),
			('net', { 'devices': 'em1' }),
		])
		profile2 = Profile("test_profile2",config2)
		merged_profile = merger.merge([profile1, profile2])
		self.assertEqual(merged_profile.options["test_option"],"test_value2")
		self.assertIn("net", merged_profile.units)
		self.assertEqual(merged_profile.units["net"].options["custom"],\
			"custom_value")
		self.assertEqual(merged_profile.units["net"].devices, "em1")

	def test_merge_with_replace(self):
		merger = Merger()
		config1 = OrderedDict([
			("main", {"test_option" : "test_value1"}),
			("net", { "devices": "em0", "custom": "option"}),
		])
		profile1 = Profile('test_profile1',config1)
		config2 = OrderedDict([
			("main", {"test_option" : "test_value2"}),
			("net", { "devices": "em1", "replace": True }),
		])
		profile2 = Profile('test_profile2',config2)
		merged_profile = merger.merge([profile1, profile2])
		self.assertEqual(merged_profile.options["test_option"],"test_value2")
		self.assertIn("net", merged_profile.units)
		self.assertNotIn("custom", merged_profile.units["net"].options)
		self.assertEqual(merged_profile.units["net"].devices, "em1")

	def test_merge_multiple_order(self):
		merger = Merger()
		config1 = OrderedDict([ ("main", {"test_option" : "test_value1"}),\
			("net", { "devices": "em0" }) ])
		profile1 = Profile('test_profile1',config1)
		config2 = OrderedDict([ ("main", {"test_option" : "test_value2"}),\
			("net", { "devices": "em1" }) ])
		profile2 = Profile('test_profile2',config2)
		config3 = OrderedDict([ ("main", {"test_option" : "test_value3"}),\
			("net", { "devices": "em2" }) ])
		profile3 = Profile('test_profile3',config3)
		merged_profile = merger.merge([profile1, profile2, profile3])
		self.assertEqual(merged_profile.options["test_option"],"test_value3")
		self.assertIn("net", merged_profile.units)
		self.assertEqual(merged_profile.units["net"].devices, "em2")
070701000000FF000081A40000000000000000000000016391BC3A00000699000000000000000000000000000000000000004000000000tuned-2.19.0.29+git.b894a3e/tests/unit/profiles/test_profile.py
import unittest
import tuned.profiles
import collections

class MockProfile(tuned.profiles.profile.Profile):
	def _create_unit(self, name, config):
		return (name, config)

class ProfileTestCase(unittest.TestCase):
	def test_init(self):
		MockProfile("test", {})

	def test_create_units(self):
		profile = MockProfile("test", {
			"main": { "anything": 10 },
			"network" : { "type": "net", "devices": "*" },
			"storage" : { "type": "disk" },
		})
		self.assertIs(type(profile.units), collections.OrderedDict)
		self.assertEqual(len(profile.units), 2)
		self.assertListEqual(sorted([name_config for name_config in profile.units]), sorted(["network", "storage"]))

	def test_create_units_empty(self):
		profile = MockProfile("test", {"main":{}})
		self.assertIs(type(profile.units), collections.OrderedDict)
		self.assertEqual(len(profile.units), 0)

	def test_sets_name(self):
		profile1 = MockProfile("test_one", {})
		profile2 = MockProfile("test_two", {})
		self.assertEqual(profile1.name, "test_one")
		self.assertEqual(profile2.name, "test_two")

	def test_change_name(self):
		profile = MockProfile("oldname", {})
		self.assertEqual(profile.name, "oldname")
		profile.name = "newname"
		self.assertEqual(profile.name, "newname")

	def test_sets_options(self):
		profile = MockProfile("test", {
			"main": { "anything": 10 },
			"network" : { "type": "net", "devices": "*" },
		})
		self.assertIs(type(profile.options), dict)
		self.assertEqual(profile.options["anything"], 10)

	def test_sets_options_empty(self):
		profile = MockProfile("test", {
			"storage" : { "type": "disk" },
		})
		self.assertIs(type(profile.options), dict)
		self.assertEqual(len(profile.options), 0)
07070100000100000081A40000000000000000000000016391BC3A0000048E000000000000000000000000000000000000003D00000000tuned-2.19.0.29+git.b894a3e/tests/unit/profiles/test_unit.py
import unittest
from tuned.profiles import Unit

class UnitTestCase(unittest.TestCase):
	def test_default_options(self):
		unit = Unit("sample", {})
		self.assertEqual(unit.name, "sample")
		self.assertEqual(unit.type, "sample")
		self.assertTrue(unit.enabled)
		self.assertFalse(unit.replace)
		self.assertDictEqual(unit.options, {})

	def test_option_type(self):
		unit = Unit("sample", {"type": "net"})
		self.assertEqual(unit.type, "net")

	def test_option_enabled(self):
		unit = Unit("sample", {"enabled": False})
		self.assertFalse(unit.enabled)
		unit.enabled = True
		self.assertTrue(unit.enabled)

	def test_option_replace(self):
		unit = Unit("sample", {"replace": True})
		self.assertTrue(unit.replace)

	def test_option_custom(self):
		unit = Unit("sample", {"enabled": True, "type": "net", "custom": "value", "foo": "bar"})
		self.assertDictEqual(unit.options, {"custom": "value", "foo": "bar"})
		unit.options = {"hello": "world"}
		self.assertDictEqual(unit.options, {"hello": "world"})

	def test_parsing_options(self):
		unit = Unit("sample", {"type": "net", "enabled": True, "replace": True, "other": "foo"})
		self.assertEqual(unit.type, "net")
07070100000101000081A40000000000000000000000016391BC3A00000334000000000000000000000000000000000000004200000000tuned-2.19.0.29+git.b894a3e/tests/unit/profiles/test_variables.py
import unittest
import tempfile
import shutil
from tuned.profiles import variables, profile

class VariablesTestCase(unittest.TestCase):
	@classmethod
	def setUpClass(cls):
		cls.test_dir = tempfile.mkdtemp()
		with open(cls.test_dir + "/variables", 'w') as f:
			f.write("variable1=var1\n")

	def test_from_file(self):
		v = variables.Variables()
		v.add_from_file(self.test_dir + "/variables")
		self.assertEqual("This is var1", v.expand("This is ${variable1}"))

	def test_from_unit(self):
		mock_unit = { "include": self.test_dir + "/variables", "variable2": "var2" }
		v = variables.Variables()
		v.add_from_cfg(mock_unit)
		self.assertEqual("This is var1 and this is var2", v.expand("This is ${variable1} and this is ${variable2}"))

	@classmethod
	def tearDownClass(cls):
		shutil.rmtree(cls.test_dir)
07070100000102000041ED0000000000000000000000026391BC3A00000000000000000000000000000000000000000000002F00000000tuned-2.19.0.29+git.b894a3e/tests/unit/storage
07070100000103000081A40000000000000000000000016391BC3A0000000F000000000000000000000000000000000000003B00000000tuned-2.19.0.29+git.b894a3e/tests/unit/storage/__init__.py
import globals
07070100000104000081A40000000000000000000000016391BC3A0000029C000000000000000000000000000000000000003F00000000tuned-2.19.0.29+git.b894a3e/tests/unit/storage/test_factory.py
import unittest
try:
	from unittest.mock import Mock
except ImportError:
	from mock import Mock
import tuned.storage

class StorageFactoryTestCase(unittest.TestCase):
	def test_create(self):
		mock_provider = Mock()
		factory = tuned.storage.Factory(mock_provider)
		self.assertEqual(mock_provider, factory.provider)

	def test_create_storage(self):
		mock_provider = Mock()
		factory = tuned.storage.Factory(mock_provider)
		storage_foo = factory.create("foo")
		storage_bar = factory.create("bar")
		self.assertIsInstance(storage_foo, tuned.storage.Storage)
		self.assertIsInstance(storage_bar, tuned.storage.Storage)
		self.assertIsNot(storage_foo, storage_bar)
07070100000105000081A40000000000000000000000016391BC3A000008C5000000000000000000000000000000000000004700000000tuned-2.19.0.29+git.b894a3e/tests/unit/storage/test_pickle_provider.py
import unittest
import os.path
import tempfile
import tuned.storage
import tuned.consts as consts

temp_storage_file = tempfile.TemporaryFile(mode='r')
consts.DEFAULT_STORAGE_FILE = temp_storage_file.name

class StoragePickleProviderTestCase(unittest.TestCase):
	def setUp(self):
		(handle, filename) = tempfile.mkstemp()
		self._temp_filename = filename

	def tearDown(self):
		if os.path.exists(self._temp_filename):
			os.unlink(self._temp_filename)

	def test_default_path(self):
		provider = tuned.storage.PickleProvider(self._temp_filename)
		self.assertEqual(self._temp_filename, provider._path)
		provider = tuned.storage.PickleProvider()
		self.assertEqual(temp_storage_file.name, provider._path)

	def test_memory_persistence(self):
		provider = tuned.storage.PickleProvider(self._temp_filename)
		self.assertEqual("default", provider.get("ns1", "opt1", "default"))
		self.assertIsNone(provider.get("ns2", "opt1"))
		provider.set("ns1", "opt1", "value1")
		provider.set("ns1", "opt2", "value2")
		provider.set("ns2", "opt1", "value3")
		self.assertEqual("value1", provider.get("ns1", "opt1"))
		self.assertEqual("value2", provider.get("ns1", "opt2"))
		self.assertEqual("value3", provider.get("ns2", "opt1"))
		provider.unset("ns1", "opt1")
		self.assertIsNone(provider.get("ns1", "opt1"))
		self.assertEqual("value2", provider.get("ns1", "opt2"))
		provider.clear()
		self.assertIsNone(provider.get("ns1", "opt1"))
		self.assertIsNone(provider.get("ns1", "opt2"))
		self.assertIsNone(provider.get("ns2", "opt1"))

	def test_file_persistence(self):
		provider = tuned.storage.PickleProvider(self._temp_filename)
		provider.load()
		provider.set("ns1", "opt1", "value1")
		provider.set("ns2", "opt2", "value2")
		provider.save()
		del provider
		provider = tuned.storage.PickleProvider(self._temp_filename)
		provider.load()
		self.assertEqual("value1", provider.get("ns1", "opt1"))
		self.assertEqual("value2", provider.get("ns2", "opt2"))
		provider.clear()
		del provider
		provider = tuned.storage.PickleProvider(self._temp_filename)
		provider.load()
		self.assertIsNone(provider.get("ns1", "opt1"))
		self.assertIsNone(provider.get("ns2", "opt2"))

	@classmethod
	def tearDownClass(cls):
		temp_storage_file.close()
07070100000106000081A40000000000000000000000016391BC3A000003E6000000000000000000000000000000000000003F00000000tuned-2.19.0.29+git.b894a3e/tests/unit/storage/test_storage.py
import unittest
try:
	from unittest.mock import Mock
	from unittest.mock import call
except ImportError:
	from mock import Mock
	from mock import call
import tuned.storage

class StorageStorageTestCase(unittest.TestCase):
	def test_set(self):
		mock_provider = Mock()
		factory = tuned.storage.Factory(mock_provider)
		storage = factory.create("foo")
		storage.set("optname", "optval")
		mock_provider.set.assert_called_once_with("foo", "optname", "optval")

	def test_get(self):
		mock_provider = Mock()
		mock_provider.get.side_effect = [ None, "defval", "somevalue" ]
		factory = tuned.storage.Factory(mock_provider)
		storage = factory.create("foo")
		self.assertEqual(None, storage.get("optname"))
		self.assertEqual("defval", storage.get("optname", "defval"))
		self.assertEqual("somevalue", storage.get("existing"))
		calls = [ call("foo", "optname", None), call("foo", "optname", "defval"), call("foo", "existing", None) ]
		mock_provider.get.assert_has_calls(calls)
07070100000107000041ED0000000000000000000000026391BC3A00000000000000000000000000000000000000000000002D00000000tuned-2.19.0.29+git.b894a3e/tests/unit/utils
07070100000108000081A40000000000000000000000016391BC3A0000000F000000000000000000000000000000000000003900000000tuned-2.19.0.29+git.b894a3e/tests/unit/utils/__init__.py
import globals
07070100000109000081A40000000000000000000000016391BC3A000022E2000000000000000000000000000000000000003E00000000tuned-2.19.0.29+git.b894a3e/tests/unit/utils/test_commands.py
import unittest
import tempfile
import shutil
import re
import os
from tuned.utils.commands import commands
import tuned.consts as consts
from tuned.exceptions import TunedException
import tuned.utils.commands

class CommandsTestCase(unittest.TestCase):
	def setUp(self):
		self._commands = commands()
		self._test_dir = tempfile.mkdtemp()
		self._test_file = tempfile.NamedTemporaryFile(mode='r',dir = self._test_dir)

	def test_get_bool(self):
		positive_values = ['y','yes','t','true']
		negative_values = ['n','no','f','false']
		for val in positive_values:
			self.assertEqual(self._commands.get_bool(val),"1")
		for val in negative_values:
			self.assertEqual(self._commands.get_bool(val),"0")
		self.assertEqual(self._commands.get_bool('bad_value'),'bad_value')

	def test_remove_ws(self):
		self.assertEqual(self._commands.remove_ws(' a bc '),'a bc')

	def test_unquote(self):
		self.assertEqual(self._commands.unquote('"whatever"'),'whatever')

	def test_escape(self):
		self.assertEqual(self._commands.escape('\\'),'\\\\')

	def test_unescape(self):
		self.assertEqual(self._commands.unescape('\\'),'')

	def test_align_str(self):
		self.assertEqual(self._commands.align_str('abc',5,'def'),'abc  def')

	def test_dict2list(self):
		dictionary = {'key1':1,'key2':2,'key3':3}
		self.assertEqual(self._commands.dict2list(dictionary)\
			,['key1',1,'key2',2,'key3',3])

	def test_re_lookup_compile(self):
		pattern = re.compile(r'([1-9])')
		dictionary = {'[1-9]':''}
		self.assertEqual(self._commands.re_lookup_compile(dictionary).pattern\
			,pattern.pattern)
		self.assertIsNone(self._commands.re_lookup_compile(None))

	def test_multiple_re_replace(self):
		text = 'abcd1234'
		dictionary = {'abc':'gfh'}
		pattern = self._commands.re_lookup_compile(dictionary)
		self.assertEqual(self._commands.multiple_re_replace(dictionary,text)\
			,'gfhd1234')
		self.assertEqual(self._commands.multiple_re_replace(\
			dictionary,text,pattern),'gfhd1234')

	def test_re_lookup(self):
		dictionary = {'abc':'abc','mno':'mno'}
		text1 = 'abc def'
		text2 = 'jkl mno'
		text12 = 'abc mno'
		text3 = 'whatever'
self.assertEqual(self._commands.re_lookup(dictionary,text1),'abc') self.assertEqual(self._commands.re_lookup(dictionary,text2),'mno') self.assertEqual(self._commands.re_lookup(dictionary,text12),'abc') self.assertIsNone(self._commands.re_lookup(dictionary,text3),None) def test_write_to_file(self): self.assertTrue(self._commands.write_to_file(self._test_file.name,\ 'hello world')) with open(self._test_file.name,'r') as f: self.assertEqual(f.read(),'hello world') self.assertTrue(self._commands.write_to_file(self._test_file.name,\ 'world hello')) with open(self._test_file.name,'r') as f: self.assertEqual(f.read(),'world hello') local_test_file = self._test_dir + '/dir' +'/self._test_file' self.assertTrue(self._commands.write_to_file(local_test_file,\ 'hello world',True)) with open(local_test_file,'r') as f: self.assertEqual(f.read(),'hello world') shutil.rmtree(os.path.dirname(local_test_file)) self.assertFalse(self._commands.write_to_file(local_test_file,\ 'hello world')) def test_read_file(self): with open(self._test_file.name,'w') as f: f.write('hello world') self.assertEqual(self._commands.read_file(self._test_file.name),\ 'hello world') self.assertEqual(self._commands.read_file('/bad_name','error'),\ 'error') def test_rmtree(self): test_tree = self._test_dir + '/one/two' os.makedirs(test_tree) test_tree = self._test_dir + '/one' self.assertTrue(self._commands.rmtree(test_tree)) self.assertFalse(os.path.isdir(test_tree)) self.assertTrue(self._commands.rmtree(test_tree)) def test_unlink(self): local_test_file = self._test_dir + 'file_to_delete' open(local_test_file,'w').close() self.assertTrue(os.path.exists(local_test_file)) self.assertTrue(self._commands.unlink(local_test_file)) self.assertFalse(os.path.exists(local_test_file)) self.assertTrue(self._commands.unlink(local_test_file)) def test_rename(self): rename_test_file = self._test_dir + '/bad_name' open(rename_test_file,'w').close() self.assertTrue(self._commands.rename(rename_test_file,\ self._test_dir + 
'/right_name')) self.assertTrue(os.path.exists(self._test_dir + '/right_name')) os.remove(self._test_dir + '/right_name') def test_copy(self): copy_test_file = self._test_dir + '/origo' with open(copy_test_file,'w') as f: f.write('hello world') self.assertTrue(self._commands.copy(copy_test_file,\ self._test_dir + '/copy')) self.assertTrue(os.path.exists(self._test_dir + '/copy')) self.assertTrue(os.path.exists(self._test_dir + '/origo')) with open(self._test_dir + '/copy','r') as f: self.assertEqual(f.read(),'hello world') os.remove(self._test_dir + '/origo') os.remove(self._test_dir + '/copy') def test_replace_in_file(self): with open(self._test_file.name,'w') as f: f.write('hello world') self.assertTrue(self._commands.replace_in_file(self._test_file.name,\ 'hello','bye')) with open(self._test_file.name,'r') as f: self.assertEqual(f.read(),'bye world') def test_multiple_replace_in_file(self): dictionary = {'abc':'123','ghi':'456'} with open(self._test_file.name,'w') as f: f.write('abc def ghi') self.assertTrue(self._commands.multiple_replace_in_file(\ self._test_file.name,dictionary)) with open(self._test_file.name,'r') as f: self.assertEqual(f.read(),'123 def 456') def test_add_modify_option_in_file(self): with open(self._test_file.name,'w') as f: f.write('option1="123"\noption2="456"\n') dictionary = {'option3':789,'option1':321} self.assertTrue(self._commands.add_modify_option_in_file(\ self._test_file.name,dictionary)) with open(self._test_file.name,'r') as f: self.assertEqual(f.read(),\ 'option1="321"\noption2="456"\noption3="789"\n') def test_get_active_option(self): self.assertEqual(self._commands.get_active_option('opt1 [opt2] opt3'),\ 'opt2') self.assertEqual(self._commands.get_active_option('opt1 opt2 opt3'),\ 'opt1') self.assertEqual(self._commands.get_active_option(\ 'opt1 opt2 opt3',False),'opt1 opt2 opt3') def test_hex2cpulist(self): self.assertEqual(self._commands.hex2cpulist('0xf'),[0,1,2,3]) 
self.assertEqual(self._commands.hex2cpulist('0x1,0000,0001'),[0,32]) def test_cpulist_unpack(self): cpus = '4-8,^6,0xf00,,!10-11' self.assertEqual(self._commands.cpulist_unpack(cpus),[4,5,7,8,9]) cpus = '\'\'"4-8,^6\'"' self.assertEqual(self._commands.cpulist_unpack(cpus, None),[]) cpus = '\'\'"4-8,^6\'"' self.assertEqual(self._commands.cpulist_unpack(cpus),[4,5,7,8]) cpus = '"4-8\',^6\'"' self.assertEqual(self._commands.cpulist_unpack(cpus),[]) cpus = '1,2,3-x' self.assertEqual(self._commands.cpulist_unpack(cpus),[]) cpus = '1,2,!3-x' self.assertEqual(self._commands.cpulist_unpack(cpus),[]) cpus = {"1": "1"} self.assertEqual(self._commands.cpulist_unpack(cpus),[]) def test_cpulist_pack(self): self.assertEqual(self._commands.cpulist_pack([0,1,3,4,5,6,8,9,32]),\ ['0-1','3-6','8-9','32']) def test_cpulist2hex(self): self.assertEqual(self._commands.cpulist2hex('1-3,5,32'),\ '00000001,0000002e') def test_cpulist2bitmask(self): self.assertEqual(self._commands.cpulist2bitmask([1,2,3]),0b1110) self.assertEqual(self._commands.cpulist2bitmask([2,4,6]),0b1010100) def test_get_size(self): self.assertEqual(self._commands.get_size('100KB'),102400) self.assertEqual(self._commands.get_size('100Kb'),102400) self.assertEqual(self._commands.get_size('100kb'),102400) self.assertEqual(self._commands.get_size('1MB'),1024 * 1024) self.assertEqual(self._commands.get_size('1GB'),1024 * 1024 * 1024) def test_get_active_profile(self): consts.ACTIVE_PROFILE_FILE = self._test_dir + '/active_profile' consts.PROFILE_MODE_FILE = self._test_dir + '/profile_mode' with open(consts.ACTIVE_PROFILE_FILE,'w') as f: f.write('test_profile') with open(consts.PROFILE_MODE_FILE,'w') as f: f.write('auto') (profile,mode) = self._commands.get_active_profile() self.assertEqual(profile,'test_profile') self.assertEqual(mode,False) os.remove(consts.ACTIVE_PROFILE_FILE) os.remove(consts.PROFILE_MODE_FILE) (profile,mode) = self._commands.get_active_profile() self.assertEqual(profile,None) self.assertEqual(mode,None) 
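`test_cpulist_pack` above expects `[0,1,3,4,5,6,8,9,32]` to collapse into `['0-1','3-6','8-9','32']`. A self-contained sketch of that run-length packing, under the assumption that duplicates are ignored and input need not be sorted (the real implementation is `tuned.utils.commands.commands.cpulist_pack`; `cpulist_pack_sketch` is a hypothetical name):

```python
def cpulist_pack_sketch(cpus):
    """Pack CPU numbers into kernel-style range strings.

    Sketch of the behaviour checked by test_cpulist_pack, e.g.
    [0, 1, 3, 4] -> ['0-1', '3-4'].
    """
    runs = []
    for cpu in sorted(set(cpus)):
        if runs and cpu == runs[-1][1] + 1:
            runs[-1][1] = cpu            # extend the current run
        else:
            runs.append([cpu, cpu])      # start a new run
    return ["%d" % lo if lo == hi else "%d-%d" % (lo, hi)
            for lo, hi in runs]

print(cpulist_pack_sketch([0, 1, 3, 4, 5, 6, 8, 9, 32]))
# ['0-1', '3-6', '8-9', '32']
```

This is the inverse direction of `cpulist_unpack`, which the tests show also has to tolerate negation (`^`/`!`), hex masks, quoting, and malformed input (returning `[]`).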
def test_save_active_profile(self): consts.ACTIVE_PROFILE_FILE = self._test_dir + '/active_profile' consts.PROFILE_MODE_FILE = self._test_dir + '/profile_mode' self._commands.save_active_profile('test_profile',False) with open(consts.ACTIVE_PROFILE_FILE) as f: self.assertEqual(f.read(),'test_profile\n') with open(consts.PROFILE_MODE_FILE) as f: self.assertEqual(f.read(),'auto\n') os.remove(consts.ACTIVE_PROFILE_FILE) os.remove(consts.PROFILE_MODE_FILE) def tearDown(self): self._test_file.close() shutil.rmtree(self._test_dir) 0707010000010A000081A40000000000000000000000016391BC3A00000684000000000000000000000000000000000000004300000000tuned-2.19.0.29+git.b894a3e/tests/unit/utils/test_global_config.pyimport unittest import tempfile import shutil import os import tuned.consts as consts import tuned.utils.global_config as global_config class GlobalConfigTestCase(unittest.TestCase): @classmethod def setUpClass(cls): cls.test_dir = tempfile.mkdtemp() with open(cls.test_dir + '/test_config','w') as f: f.write('test_option = hello #this is comment\ntest_bool = 1\ntest_size = 12MB\n'\ + '/sys/bus/pci/devices/0000:00:02.0/power/control=auto\n'\ + '/sys/bus/pci/devices/0000:04:00.0/power/control=auto\n'\ + 'false_bool=0\n'\ + consts.CFG_LOG_FILE_COUNT + " = " + str(consts.CFG_DEF_LOG_FILE_COUNT) + "1\n") cls._global_config = global_config.GlobalConfig(\ cls.test_dir + '/test_config') def test_get(self): self.assertEqual(self._global_config.get('test_option'), 'hello') self.assertEqual(self._global_config.get('/sys/bus/pci/devices/0000:00:02.0/power/control'), 'auto') def test_get_bool(self): self.assertTrue(self._global_config.get_bool('test_bool')) self.assertFalse(self._global_config.get_bool('false_bool')) def test_get_size(self): self.assertEqual(self._global_config.get_size('test_size'),\ 12*1024*1024) self._global_config.set('test_size', 'bad_value') self.assertIsNone(self._global_config.get_size('test_size')) def test_default(self): daemon = 
self._global_config.get(consts.CFG_DAEMON) self.assertEqual(daemon, consts.CFG_DEF_DAEMON) log_file_count = self._global_config.get(consts.CFG_LOG_FILE_COUNT) self.assertIsNotNone(log_file_count) self.assertNotEqual(log_file_count, consts.CFG_DEF_LOG_FILE_COUNT) @classmethod def tearDownClass(cls): shutil.rmtree(cls.test_dir) 0707010000010B000041ED00000000000000000000000D6391BC3A00000000000000000000000000000000000000000000002200000000tuned-2.19.0.29+git.b894a3e/tuned0707010000010C000081A40000000000000000000000016391BC3A0000025C000000000000000000000000000000000000002B00000000tuned-2.19.0.29+git.b894a3e/tuned-adm.bash# bash completion for tuned-adm _tuned_adm() { local commands="active list off profile recommend verify --version -v --help -h auto_profile profile_mode profile_info" local cur prev words cword _init_completion || return if [[ "$cword" -eq 1 ]]; then COMPREPLY=( $(compgen -W "$commands" -- "$cur" ) ) elif [[ "$cword" -eq 2 && ("$prev" == "profile" || "$prev" == "profile_info") ]]; then COMPREPLY=( $(compgen -W "$(command find /usr/lib/tuned /etc/tuned -mindepth 1 -maxdepth 1 -type d -printf "%f\n")" -- "$cur" ) ) else COMPREPLY=() fi return 0 } && complete -F _tuned_adm tuned-adm 0707010000010D000081ED0000000000000000000000016391BC3A000014DF000000000000000000000000000000000000002900000000tuned-2.19.0.29+git.b894a3e/tuned-adm.py#!/usr/bin/python3 -Es # # tuned: daemon for monitoring and adaptive tuning of system devices # # Copyright (C) 2008-2013 Red Hat, Inc. # # This program is free software; you can redistribute it and/or # modify it under the terms of the GNU General Public License # as published by the Free Software Foundation; either version 2 # of the License, or (at your option) any later version. # # This program is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU General Public License for more details. 
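The `get_bool` tests above (both in `CommandsTestCase` and `GlobalConfigTestCase`) rely on yes/no-style strings being normalized. A hedged sketch of that normalization, assuming the accepted spellings shown in the tests; `parse_bool_sketch` is a hypothetical helper, and unlike `commands.get_bool` (which returns `"1"`/`"0"` strings) it returns a Python bool with a fallback default:

```python
def parse_bool_sketch(value, default=False):
    """Map yes/no-style strings to bool; unrecognized values -> default."""
    value = str(value).strip().lower()
    if value in ("1", "y", "yes", "t", "true"):
        return True
    if value in ("0", "n", "no", "f", "false"):
        return False
    return default
```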
# # You should have received a copy of the GNU General Public License # along with this program; if not, write to the Free Software # Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA. # from __future__ import print_function import argparse import sys import traceback import tuned.admin import tuned.consts as consts import tuned.version as ver from tuned.utils.global_config import GlobalConfig def check_positive(value): try: val = int(value) except ValueError: val = -1 if val <= 0: raise argparse.ArgumentTypeError("%s has to be > 0" % value) return val def check_log_level(value): try: return consts.CAPTURE_LOG_LEVELS[value.lower()] except KeyError: levels = ", ".join(consts.CAPTURE_LOG_LEVELS.keys()) raise argparse.ArgumentTypeError( "Invalid log level: %s. Valid log levels: %s." % (value, levels)) if __name__ == "__main__": config = GlobalConfig() parser = argparse.ArgumentParser(description="Manage tuned daemon.") parser.add_argument('--version', "-v", action = "version", version = "%%(prog)s %s.%s.%s" % (ver.TUNED_VERSION_MAJOR, ver.TUNED_VERSION_MINOR, ver.TUNED_VERSION_PATCH)) parser.add_argument("--debug", "-d", action="store_true", help="show debug messages") parser.add_argument("--async", "-a", action="store_true", help="with dbus do not wait on commands completion and return immediately") parser.add_argument("--timeout", "-t", default = consts.ADMIN_TIMEOUT, type = check_positive, help="with sync operation use specific timeout instead of the default %d second(s)" % consts.ADMIN_TIMEOUT) levels = ", ".join(consts.CAPTURE_LOG_LEVELS.keys()) help = "level of log messages to capture (one of %s). 
Default: %s" \ % (levels, consts.CAPTURE_LOG_LEVEL) parser.add_argument("--loglevel", "-l", default = consts.CAPTURE_LOG_LEVEL, type = check_log_level, help = help) subparsers = parser.add_subparsers() parser_list = subparsers.add_parser("list", help="list available profiles or plugins (by default profiles)") parser_list.set_defaults(action="list") parser_list.add_argument("list_choice", nargs="?",default="profiles", choices=["plugins","profiles"], help="choose what to list", metavar="{plugins|profiles}") parser_list.add_argument("--verbose", "-v", action="store_true", help="show plugin's configuration parameters and their meaning") parser_active = subparsers.add_parser("active", help="show active profile") parser_active.set_defaults(action="active") parser_off = subparsers.add_parser("off", help="switch off all tunings") parser_off.set_defaults(action="off") parser_profile = subparsers.add_parser("profile", help="switch to a given profile, or list available profiles if no profile is given") parser_profile.set_defaults(action="profile") parser_profile.add_argument("profiles", metavar="profile", type=str, nargs="*", help="profile name") parser_profile_info = subparsers.add_parser("profile_info", help="show information/description of given profile or current profile if no profile is specified") parser_profile_info.set_defaults(action="profile_info") parser_profile_info.add_argument("profile", metavar="profile", type=str, nargs="?", default="", help="profile name, current profile if not specified") if config.get(consts.CFG_RECOMMEND_COMMAND, consts.CFG_DEF_RECOMMEND_COMMAND): parser_off = subparsers.add_parser("recommend", help="recommend profile") parser_off.set_defaults(action="recommend_profile") parser_verify = subparsers.add_parser("verify", help="verify profile") parser_verify.set_defaults(action="verify_profile") parser_verify.add_argument("--ignore-missing", "-i", action="store_true", help="do not treat missing/non-supported tunings as errors") 
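The subcommand wiring above uses a common argparse pattern: each subparser stores its handler name via `set_defaults(action=...)`, and the main body pops `"action"` out of `vars(args)` before forwarding the remaining options. A minimal self-contained sketch of that dispatch (`demo-adm` and the reduced subcommand set are illustrative only):

```python
import argparse

# Sketch of the set_defaults(action=...) dispatch pattern used by
# tuned-adm: each subcommand records its handler name, and the caller
# pops "action" out of the parsed options to decide what to run.
parser = argparse.ArgumentParser(prog="demo-adm")
subparsers = parser.add_subparsers()

parser_active = subparsers.add_parser("active", help="show active profile")
parser_active.set_defaults(action="active")

parser_profile = subparsers.add_parser("profile", help="switch profile")
parser_profile.set_defaults(action="profile")
parser_profile.add_argument("profiles", nargs="*")

options = vars(parser.parse_args(["profile", "balanced"]))
action_name = options.pop("action")
print(action_name, options)  # profile {'profiles': ['balanced']}
```

Because the leftover `options` dict contains only the subcommand's own arguments, it can be splatted straight into the handler, as `tuned-adm` does with `admin.action(action_name, **options)`.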
parser_auto_profile = subparsers.add_parser("auto_profile", help="enable automatic profile selection mode, switch to the recommended profile") parser_auto_profile.set_defaults(action="auto_profile") parser_profile_mode = subparsers.add_parser("profile_mode", help="show current profile selection mode") parser_profile_mode.set_defaults(action="profile_mode") args = parser.parse_args(sys.argv[1:]) options = vars(args) debug = options.pop("debug") asynco = options.pop("async") timeout = options.pop("timeout") try: action_name = options.pop("action") except KeyError: parser.print_usage(file = sys.stderr) sys.exit(1) log_level = options.pop("loglevel") result = False dbus = config.get_bool(consts.CFG_DAEMON, consts.CFG_DEF_DAEMON) try: admin = tuned.admin.Admin(dbus, debug, asynco, timeout, log_level) result = admin.action(action_name, **options) except: traceback.print_exc() sys.exit(3) if result == False: sys.exit(1) else: sys.exit(0) 0707010000010E000081A40000000000000000000000016391BC3A000000F8000000000000000000000000000000000000002E00000000tuned-2.19.0.29+git.b894a3e/tuned-gui.desktop[Desktop Entry] Name=tuned-gui GenericName=tuned Comment=GTK GUI that can control TuneD daemon and provides simple profile editor Exec=/usr/sbin/tuned-gui Icon=tuned Terminal=false Type=Application Categories=Settings;HardwareSettings; Version=1.0 0707010000010F000081A40000000000000000000000016391BC3A00013496000000000000000000000000000000000000002C00000000tuned-2.19.0.29+git.b894a3e/tuned-gui.glade<?xml version="1.0" encoding="UTF-8"?> <!-- Generated with glade 3.22.1 --> <interface> <requires lib="gtk+" version="3.10"/> <object class="GtkAboutDialog" id="aboutdialog1"> <property name="can_focus">False</property> <property name="type_hint">dialog</property> <property name="program_name">TuneD Manager</property> <property name="logo_icon_name">image-missing</property> <child> <placeholder/> </child> <child internal-child="vbox"> <object class="GtkBox" id="aboutdialog-vbox1"> <property 
name="can_focus">False</property> <property name="orientation">vertical</property> <property name="spacing">2</property> <child internal-child="action_area"> <object class="GtkButtonBox" id="aboutdialog-action_area1"> <property name="can_focus">False</property> <property name="layout_style">end</property> </object> <packing> <property name="expand">False</property> <property name="fill">True</property> <property name="pack_type">end</property> <property name="position">0</property> </packing> </child> <child> <placeholder/> </child> </object> </child> </object> <object class="GtkDialog" id="changeValueDialog"> <property name="can_focus">False</property> <property name="type_hint">dialog</property> <child> <placeholder/> </child> <child internal-child="vbox"> <object class="GtkBox" id="dialog-vbox4"> <property name="can_focus">False</property> <property name="orientation">vertical</property> <property name="spacing">2</property> <child internal-child="action_area"> <object class="GtkButtonBox" id="dialog-action_area4"> <property name="can_focus">False</property> <property name="layout_style">end</property> <child> <object class="GtkButton" id="buttonApplyChangeValue"> <property name="label">gtk-apply</property> <property name="visible">True</property> <property name="can_focus">True</property> <property name="receives_default">True</property> <property name="use_stock">True</property> </object> <packing> <property name="expand">False</property> <property name="fill">True</property> <property name="position">0</property> </packing> </child> <child> <object class="GtkButton" id="buttonCancel1"> <property name="label">gtk-cancel</property> <property name="visible">True</property> <property name="can_focus">True</property> <property name="receives_default">True</property> <property name="use_stock">True</property> </object> <packing> <property name="expand">False</property> <property name="fill">True</property> <property name="position">1</property> </packing> </child> 
</object> <packing> <property name="expand">False</property> <property name="fill">True</property> <property name="pack_type">end</property> <property name="position">0</property> </packing> </child> <child> <object class="GtkBox" id="box10"> <property name="visible">True</property> <property name="can_focus">False</property> <property name="orientation">vertical</property> <child> <object class="GtkLabel" id="labelTextDialogChangeValue"> <property name="visible">True</property> <property name="can_focus">False</property> <property name="margin_top">20</property> <property name="margin_bottom">20</property> <property name="hexpand">True</property> <property name="vexpand">True</property> </object> <packing> <property name="expand">False</property> <property name="fill">True</property> <property name="position">0</property> </packing> </child> <child> <object class="GtkEntry" id="entry1"> <property name="width_request">300</property> <property name="visible">True</property> <property name="can_focus">True</property> <property name="margin_top">30</property> <property name="margin_bottom">30</property> <property name="hexpand">True</property> <property name="vexpand">True</property> </object> <packing> <property name="expand">False</property> <property name="fill">True</property> <property name="position">1</property> </packing> </child> </object> <packing> <property name="expand">False</property> <property name="fill">True</property> <property name="position">1</property> </packing> </child> </object> </child> <action-widgets> <action-widget response="1">buttonApplyChangeValue</action-widget> <action-widget response="-1">buttonCancel1</action-widget> </action-widgets> </object> <object class="GtkDialog" id="dialogAddPlugin"> <property name="can_focus">False</property> <property name="type_hint">dialog</property> <child> <placeholder/> </child> <child internal-child="vbox"> <object class="GtkBox" id="dialog-vbox2"> <property name="can_focus">False</property> 
<property name="orientation">vertical</property> <property name="spacing">2</property> <child internal-child="action_area"> <object class="GtkButtonBox" id="dialog-action_area2"> <property name="can_focus">False</property> <property name="layout_style">end</property> <child> <object class="GtkButton" id="buttonAddPluginDialog"> <property name="label">gtk-ok</property> <property name="visible">True</property> <property name="can_focus">True</property> <property name="receives_default">True</property> <property name="use_stock">True</property> <property name="image_position">bottom</property> </object> <packing> <property name="expand">False</property> <property name="fill">True</property> <property name="position">0</property> </packing> </child> <child> <object class="GtkButton" id="buttonCloseAddPlugin"> <property name="label">gtk-cancel</property> <property name="visible">True</property> <property name="can_focus">True</property> <property name="receives_default">True</property> <property name="use_stock">True</property> </object> <packing> <property name="expand">False</property> <property name="fill">True</property> <property name="position">1</property> </packing> </child> </object> <packing> <property name="expand">False</property> <property name="fill">True</property> <property name="pack_type">end</property> <property name="position">0</property> </packing> </child> <child> <object class="GtkBox" id="box7"> <property name="visible">True</property> <property name="can_focus">False</property> <property name="orientation">vertical</property> <child> <object class="GtkLabel" id="label7"> <property name="visible">True</property> <property name="can_focus">False</property> <property name="margin_top">30</property> <property name="margin_bottom">10</property> <property name="label" translatable="yes">Plugin:</property> </object> <packing> <property name="expand">False</property> <property name="fill">True</property> <property name="position">0</property> 
</packing> </child> <child> <object class="GtkComboBoxText" id="comboboxPlugins"> <property name="visible">True</property> <property name="can_focus">False</property> </object> <packing> <property name="expand">False</property> <property name="fill">True</property> <property name="position">1</property> </packing> </child> </object> <packing> <property name="expand">False</property> <property name="fill">True</property> <property name="position">1</property> </packing> </child> </object> </child> <action-widgets> <action-widget response="1">buttonAddPluginDialog</action-widget> <action-widget response="-1">buttonCloseAddPlugin</action-widget> </action-widgets> </object> <object class="GtkDialog" id="dialogYesNo"> <property name="can_focus">False</property> <property name="type_hint">dialog</property> <child> <placeholder/> </child> <child internal-child="vbox"> <object class="GtkBox"> <property name="can_focus">False</property> <property name="orientation">vertical</property> <property name="spacing">2</property> <child internal-child="action_area"> <object class="GtkButtonBox"> <property name="can_focus">False</property> <property name="margin_left">66</property> <property name="layout_style">spread</property> <child> <object class="GtkButton" id="buttonPositiveYesNoDialog"> <property name="label" translatable="yes">Yes</property> <property name="visible">True</property> <property name="can_focus">True</property> <property name="receives_default">True</property> </object> <packing> <property name="expand">True</property> <property name="fill">True</property> <property name="position">0</property> </packing> </child> <child> <object class="GtkButton" id="buttonNegativeYesNoDialog"> <property name="label" translatable="yes">No</property> <property name="visible">True</property> <property name="can_focus">True</property> <property name="receives_default">True</property> </object> <packing> <property name="expand">True</property> <property name="fill">True</property> 
<property name="position">1</property> </packing> </child> </object> <packing> <property name="expand">False</property> <property name="fill">False</property> <property name="position">0</property> </packing> </child> <child> <object class="GtkLabel" id="labelQuestionYesNoDialog"> <property name="visible">True</property> <property name="can_focus">False</property> <property name="label" translatable="yes">label</property> </object> <packing> <property name="expand">False</property> <property name="fill">True</property> <property name="position">1</property> </packing> </child> </object> </child> <action-widgets> <action-widget response="1">buttonPositiveYesNoDialog</action-widget> <action-widget response="0">buttonNegativeYesNoDialog</action-widget> </action-widgets> </object> <object class="GtkWindow" id="mainWindow"> <property name="can_focus">False</property> <property name="title" translatable="yes">TuneD Manager</property> <signal name="destroy" handler="gtk_main_quit" swapped="no"/> <child> <placeholder/> </child> <child> <object class="GtkBox" id="box1"> <property name="visible">True</property> <property name="can_focus">False</property> <property name="orientation">vertical</property> <child> <object class="GtkMenuBar" id="menubar1"> <property name="visible">True</property> <property name="can_focus">False</property> <child> <object class="GtkMenuItem" id="menuitem1"> <property name="visible">True</property> <property name="can_focus">False</property> <property name="label" translatable="yes">_File</property> <property name="use_underline">True</property> <child type="submenu"> <object class="GtkMenu" id="menu1"> <property name="visible">True</property> <property name="can_focus">False</property> <child> <object class="GtkImageMenuItem" id="imagemenuitemQuit"> <property name="label">gtk-quit</property> <property name="visible">True</property> <property name="can_focus">False</property> <property name="use_underline">True</property> <property 
name="use_stock">True</property> <signal name="activate" handler="gtk_main_quit" swapped="no"/> </object> </child> </object> </child> </object> </child> <child> <object class="GtkMenuItem" id="menuitem3"> <property name="visible">True</property> <property name="can_focus">False</property> <property name="label" translatable="yes">_Help</property> <property name="use_underline">True</property> <child type="submenu"> <object class="GtkMenu" id="menu3"> <property name="visible">True</property> <property name="can_focus">False</property> <child> <object class="GtkImageMenuItem" id="imagemenuitemAbout"> <property name="label">gtk-about</property> <property name="visible">True</property> <property name="can_focus">False</property> <property name="use_underline">True</property> <property name="use_stock">True</property> <signal name="activate" handler="execute_about" swapped="no"/> </object> </child> </object> </child> </object> </child> </object> <packing> <property name="expand">False</property> <property name="fill">True</property> <property name="position">0</property> </packing> </child> <child> <object class="GtkNotebook" id="notebook1"> <property name="visible">True</property> <property name="can_focus">True</property> <property name="double_buffered">False</property> <property name="scrollable">True</property> <child> <object class="GtkBox" id="box3"> <property name="visible">True</property> <property name="can_focus">False</property> <property name="resize_mode">immediate</property> <property name="orientation">vertical</property> <child> <object class="GtkBox" id="box2"> <property name="visible">True</property> <property name="can_focus">False</property> <property name="orientation">vertical</property> <child> <object class="GtkGrid" id="grid2"> <property name="visible">True</property> <property name="can_focus">False</property> <property name="valign">start</property> <child> <object class="GtkLabel" id="label4"> <property name="width_request">150</property> 
<property name="visible">True</property> <property name="can_focus">False</property> <property name="halign">end</property> <property name="hexpand">True</property> <property name="label" translatable="yes">TuneD On Startup</property> <property name="ellipsize">end</property> <property name="angle">0.029999999999999999</property> <property name="xalign">0</property> </object> <packing> <property name="left_attach">0</property> <property name="top_attach">1</property> </packing> </child> <child> <object class="GtkSwitch" id="switchTunedStartStop"> <property name="visible">True</property> <property name="can_focus">True</property> <property name="halign">start</property> <property name="valign">center</property> <property name="margin_top">5</property> <property name="margin_bottom">5</property> <property name="hexpand">True</property> <property name="active">True</property> <signal name="notify::active" handler="execute_switch_tuned" swapped="no"/> </object> <packing> <property name="left_attach">1</property> <property name="top_attach">0</property> </packing> </child> <child> <object class="GtkSwitch" id="switchTunedStartupStartStop"> <property name="visible">True</property> <property name="can_focus">True</property> <property name="halign">start</property> <property name="valign">center</property> <property name="margin_top">5</property> <property name="margin_bottom">5</property> <property name="hexpand">True</property> <signal name="notify::active" handler="execute_switch_tuned" swapped="no"/> </object> <packing> <property name="left_attach">1</property> <property name="top_attach">1</property> </packing> </child> <child> <object class="GtkLabel" id="label1"> <property name="width_request">150</property> <property name="visible">True</property> <property name="can_focus">False</property> <property name="halign">end</property> <property name="hexpand">True</property> <property name="label" translatable="yes">Start TuneD Daemon</property> <property 
name="ellipsize">end</property> <property name="xalign">0</property> </object> <packing> <property name="left_attach">0</property> <property name="top_attach">0</property> </packing> </child> <child> <object class="GtkLabel" id="label12"> <property name="width_request">150</property> <property name="visible">True</property> <property name="can_focus">False</property> <property name="halign">end</property> <property name="hexpand">True</property> <property name="label" translatable="yes">Admin functions</property> </object> <packing> <property name="left_attach">0</property> <property name="top_attach">2</property> </packing> </child> <child> <object class="GtkSwitch" id="switchTunedAdminFunctions"> <property name="visible">True</property> <property name="can_focus">True</property> <property name="halign">start</property> <property name="valign">center</property> <property name="hexpand">True</property> <signal name="notify::active" handler="execute_switch_tuned_admin_functions" swapped="no"/> </object> <packing> <property name="left_attach">1</property> <property name="top_attach">2</property> </packing> </child> </object> <packing> <property name="expand">False</property> <property name="fill">False</property> <property name="padding">5</property> <property name="position">0</property> </packing> </child> <child> <object class="GtkSeparator" id="separator4"> <property name="visible">True</property> <property name="can_focus">False</property> </object> <packing> <property name="expand">False</property> <property name="fill">True</property> <property name="position">1</property> </packing> </child> <child> <object class="GtkGrid" id="grid3"> <property name="visible">True</property> <property name="can_focus">False</property> <property name="halign">center</property> <property name="valign">start</property> <child> <object class="GtkLabel" id="summaryProfileName"> <property name="visible">True</property> <property name="can_focus">False</property> <property 
name="label" translatable="yes">label</property> <property name="angle">0.089999999999999997</property> </object> <packing> <property name="left_attach">1</property> <property name="top_attach">0</property> </packing> </child> <child> <object class="GtkLabel" id="label8"> <property name="visible">True</property> <property name="can_focus">False</property> <property name="margin_top">20</property> <property name="margin_bottom">20</property> <property name="label" translatable="yes">Active profile: </property> </object> <packing> <property name="left_attach">0</property> <property name="top_attach">0</property> </packing> </child> <child> <object class="GtkLabel" id="label9"> <property name="visible">True</property> <property name="can_focus">False</property> <property name="margin_bottom">15</property> <property name="label" translatable="yes">Included profile: </property> </object> <packing> <property name="left_attach">0</property> <property name="top_attach">1</property> </packing> </child> <child> <object class="GtkLabel" id="summaryIncludedProfileName"> <property name="visible">True</property> <property name="can_focus">False</property> <property name="margin_bottom">15</property> <property name="label" translatable="yes">label</property> </object> <packing> <property name="left_attach">1</property> <property name="top_attach">1</property> </packing> </child> </object> <packing> <property name="expand">False</property> <property name="fill">True</property> <property name="position">2</property> </packing> </child> <child> <object class="GtkViewport" id="viewport1"> <property name="visible">True</property> <property name="can_focus">False</property> <child> <object class="GtkListBox" id="listboxSummaryOfActiveProfile"> <property name="visible">True</property> <property name="can_focus">False</property> </object> </child> </object> <packing> <property name="expand">False</property> <property name="fill">True</property> <property name="position">3</property> 
</packing> </child> </object> <packing> <property name="expand">False</property> <property name="fill">True</property> <property name="position">0</property> </packing> </child> <child> <object class="GtkSeparator" id="separator5"> <property name="visible">True</property> <property name="can_focus">False</property> </object> <packing> <property name="expand">False</property> <property name="fill">True</property> <property name="position">1</property> </packing> </child> </object> <packing> <property name="tab_fill">False</property> </packing> </child> <child type="tab"> <object class="GtkLabel" id="label5"> <property name="visible">True</property> <property name="can_focus">False</property> <property name="label" translatable="yes">Summary</property> <property name="justify">fill</property> <property name="angle">0.01</property> </object> <packing> <property name="tab_fill">False</property> </packing> </child> <child> <object class="GtkBox" id="box4"> <property name="visible">True</property> <property name="can_focus">False</property> <property name="margin_left">5</property> <property name="margin_right">5</property> <property name="margin_top">5</property> <property name="margin_bottom">5</property> <property name="orientation">vertical</property> <child> <object class="GtkLabel" id="label11"> <property name="visible">True</property> <property name="can_focus">False</property> <property name="margin_top">20</property> <property name="margin_bottom">20</property> <property name="label" translatable="yes">Choose profile to manage:</property> </object> <packing> <property name="expand">False</property> <property name="fill">True</property> <property name="position">0</property> </packing> </child> <child> <object class="GtkGrid" id="grid6"> <property name="visible">True</property> <property name="can_focus">False</property> <property name="halign">center</property> <property name="valign">baseline</property> <property name="margin_top">20</property> <property 
name="margin_bottom">20</property> <child> <object class="GtkButton" id="buttonCreateProfile"> <property name="label" translatable="yes">Create New Profile</property> <property name="width_request">200</property> <property name="visible">True</property> <property name="can_focus">True</property> <property name="receives_default">True</property> <property name="margin_right">120</property> <signal name="clicked" handler="execute_create_profile" swapped="no"/> </object> <packing> <property name="left_attach">0</property> <property name="top_attach">0</property> </packing> </child> <child> <object class="GtkButton" id="buttonUpadteSelectedProfile"> <property name="label" translatable="yes">Edit</property> <property name="width_request">200</property> <property name="visible">True</property> <property name="can_focus">True</property> <property name="receives_default">True</property> <property name="margin_right">10</property> <property name="resize_mode">queue</property> <signal name="clicked" handler="execute_update_profile" swapped="no"/> </object> <packing> <property name="left_attach">1</property> <property name="top_attach">0</property> </packing> </child> <child> <object class="GtkButton" id="buttonDeleteSelectedProfile"> <property name="label" translatable="yes">Delete</property> <property name="width_request">200</property> <property name="visible">True</property> <property name="can_focus">True</property> <property name="receives_default">True</property> <property name="halign">center</property> <property name="valign">center</property> <property name="image_position">right</property> <signal name="clicked" handler="execute_remove_profile" swapped="no"/> </object> <packing> <property name="left_attach">2</property> <property name="top_attach">0</property> </packing> </child> </object> <packing> <property name="expand">False</property> <property name="fill">True</property> <property name="pack_type">end</property> <property name="position">2</property> 
</packing> </child> <child> <object class="GtkScrolledWindow" id="scrolledwindow1"> <property name="visible">True</property> <property name="can_focus">True</property> <property name="shadow_type">in</property> <child> <object class="GtkTreeView" id="treeviewProfileManager"> <property name="visible">True</property> <property name="can_focus">True</property> <property name="vexpand">True</property> <property name="enable_grid_lines">both</property> <property name="enable_tree_lines">True</property> <property name="activate_on_single_click">True</property> <child internal-child="selection"> <object class="GtkTreeSelection" id="treeview-selection"/> </child> </object> </child> </object> <packing> <property name="expand">False</property> <property name="fill">True</property> <property name="position">3</property> </packing> </child> </object> <packing> <property name="position">1</property> </packing> </child> <child type="tab"> <object class="GtkLabel" id="label6"> <property name="visible">True</property> <property name="can_focus">False</property> <property name="label" translatable="yes">Profiles</property> <property name="use_markup">True</property> </object> <packing> <property name="position">1</property> <property name="tab_fill">False</property> </packing> </child> <child> <object class="GtkBox" id="box9"> <property name="visible">True</property> <property name="can_focus">False</property> <property name="orientation">vertical</property> <child> <object class="GtkComboBoxText" id="comboboxMainPlugins"> <property name="visible">True</property> <property name="can_focus">False</property> <signal name="changed" handler="on_changed_combobox_plugins" swapped="no"/> </object> <packing> <property name="expand">False</property> <property name="fill">True</property> <property name="position">0</property> </packing> </child> <child> <object class="GtkBox" id="box11"> <property name="visible">True</property> <property name="can_focus">False</property> <property 
name="orientation">vertical</property> <child> <object class="GtkLabel" id="labelInfoValuesToSet"> <property name="visible">True</property> <property name="can_focus">False</property> <property name="margin_top">30</property> <property name="margin_bottom">10</property> <property name="label" translatable="yes">Default values and options to set</property> </object> <packing> <property name="expand">False</property> <property name="fill">True</property> <property name="position">0</property> </packing> </child> <child> <object class="GtkScrolledWindow"> <property name="visible">True</property> <property name="can_focus">True</property> <property name="shadow_type">in</property> <property name="min_content_height">200</property> <child> <object class="GtkTextView" id="textviewPluginAvaibleText"> <property name="height_request">100</property> <property name="visible">True</property> <property name="can_focus">True</property> <property name="editable">False</property> </object> </child> </object> <packing> <property name="expand">False</property> <property name="fill">True</property> <property name="position">1</property> </packing> </child> <child> <object class="GtkLabel" id="labelPluginInfo"> <property name="visible">True</property> <property name="can_focus">False</property> <property name="margin_top">30</property> <property name="margin_bottom">10</property> <property name="label" translatable="yes">Documentation</property> </object> <packing> <property name="expand">False</property> <property name="fill">True</property> <property name="position">2</property> </packing> </child> <child> <object class="GtkScrolledWindow"> <property name="visible">True</property> <property name="can_focus">True</property> <property name="shadow_type">in</property> <property name="min_content_height">200</property> <child> <object class="GtkTextView" id="textviewPluginDocumentationText"> <property name="height_request">250</property> <property name="visible">True</property> 
<property name="can_focus">True</property> <property name="editable">False</property> </object> </child> </object> <packing> <property name="expand">False</property> <property name="fill">True</property> <property name="position">3</property> </packing> </child> </object> <packing> <property name="expand">False</property> <property name="fill">True</property> <property name="position">1</property> </packing> </child> </object> <packing> <property name="position">2</property> </packing> </child> <child type="tab"> <object class="GtkLabel" id="label10"> <property name="visible">True</property> <property name="can_focus">False</property> <property name="label" translatable="yes">Plugins</property> </object> <packing> <property name="position">2</property> <property name="tab_fill">False</property> </packing> </child> </object> <packing> <property name="expand">True</property> <property name="fill">True</property> <property name="position">1</property> </packing> </child> <child> <object class="GtkSeparator" id="separator1"> <property name="visible">True</property> <property name="can_focus">False</property> <property name="orientation">vertical</property> </object> <packing> <property name="expand">False</property> <property name="fill">True</property> <property name="position">2</property> </packing> </child> <child> <object class="GtkGrid" id="statusBar"> <property name="visible">True</property> <property name="can_focus">False</property> <property name="hexpand">True</property> <child> <object class="GtkLabel" id="label15"> <property name="visible">True</property> <property name="can_focus">False</property> <property name="halign">end</property> <property name="valign">end</property> <property name="margin_left">30</property> <property name="margin_top">5</property> <property name="margin_bottom">5</property> <property name="label" translatable="yes">Active Profile: </property> <property name="justify">fill</property> </object> <packing> <property 
name="left_attach">0</property> <property name="top_attach">0</property> </packing> </child> <child> <object class="GtkLabel" id="label14"> <property name="visible">True</property> <property name="can_focus">False</property> <property name="halign">end</property> <property name="valign">end</property> <property name="margin_left">30</property> <property name="margin_top">5</property> <property name="margin_bottom">5</property> <property name="hexpand">True</property> <property name="label" translatable="yes">Recommended Profile: </property> </object> <packing> <property name="left_attach">3</property> <property name="top_attach">0</property> </packing> </child> <child> <object class="GtkLabel" id="labelActualProfile"> <property name="visible">True</property> <property name="can_focus">False</property> <property name="halign">start</property> <property name="valign">end</property> <property name="margin_right">40</property> <property name="margin_top">5</property> <property name="margin_bottom">5</property> <property name="label" translatable="yes">label</property> <property name="lines">75</property> </object> <packing> <property name="left_attach">1</property> <property name="top_attach">0</property> </packing> </child> <child> <object class="GtkLabel" id="label_recommemnded_profile"> <property name="visible">True</property> <property name="can_focus">False</property> <property name="halign">start</property> <property name="valign">end</property> <property name="margin_right">40</property> <property name="margin_top">5</property> <property name="margin_bottom">5</property> <property name="label" translatable="yes">label</property> <property name="use_markup">True</property> </object> <packing> <property name="left_attach">4</property> <property name="top_attach">0</property> </packing> </child> <child> <object class="GtkLabel" id="label13"> <property name="visible">True</property> <property name="can_focus">False</property> <property name="halign">end</property> 
<property name="valign">end</property> <property name="margin_top">5</property> <property name="margin_bottom">5</property> <property name="hexpand">True</property> <property name="label" translatable="yes">DBUS Status:</property> </object> <packing> <property name="left_attach">6</property> <property name="top_attach">0</property> </packing> </child> <child> <object class="GtkLabel" id="labelDbusStatus"> <property name="visible">True</property> <property name="can_focus">False</property> <property name="halign">start</property> <property name="valign">end</property> <property name="margin_right">10</property> <property name="margin_top">5</property> <property name="margin_bottom">5</property> <property name="label" translatable="yes">label</property> </object> <packing> <property name="left_attach">7</property> <property name="top_attach">0</property> </packing> </child> <child> <object class="GtkSeparator" id="separator10"> <property name="visible">True</property> <property name="can_focus">False</property> <property name="orientation">vertical</property> </object> <packing> <property name="left_attach">2</property> <property name="top_attach">0</property> </packing> </child> <child> <object class="GtkSeparator" id="separator11"> <property name="visible">True</property> <property name="can_focus">False</property> <property name="orientation">vertical</property> </object> <packing> <property name="left_attach">5</property> <property name="top_attach">0</property> </packing> </child> </object> <packing> <property name="expand">False</property> <property name="fill">False</property> <property name="pack_type">end</property> <property name="position">3</property> </packing> </child> <child> <object class="GtkSeparator" id="separator3"> <property name="visible">True</property> <property name="can_focus">False</property> </object> <packing> <property name="expand">False</property> <property name="fill">True</property> <property name="position">4</property> </packing> 
</child> <child> <object class="GtkGrid" id="grid1"> <property name="visible">True</property> <property name="can_focus">False</property> <property name="margin_top">2</property> <property name="margin_bottom">2</property> <child> <object class="GtkButton" id="buttonFastChangeProfile"> <property name="label" translatable="yes">Change Profile</property> <property name="visible">True</property> <property name="can_focus">True</property> <property name="receives_default">True</property> <signal name="clicked" handler="execute_change_profile" swapped="no"/> </object> <packing> <property name="left_attach">2</property> <property name="top_attach">0</property> </packing> </child> <child> <object class="GtkComboBoxText" id="comboboxtextFastChangeProfile"> <property name="visible">True</property> <property name="can_focus">False</property> <property name="hexpand">True</property> </object> <packing> <property name="left_attach">1</property> <property name="top_attach">0</property> </packing> </child> <child> <object class="GtkLabel" id="label2"> <property name="visible">True</property> <property name="can_focus">False</property> <property name="margin_left">10</property> <property name="label" translatable="yes">Profile: </property> <property name="angle">0.089999999999999997</property> </object> <packing> <property name="left_attach">0</property> <property name="top_attach">0</property> </packing> </child> <child> <object class="GtkSpinner" id="spinnerFastChangeProfile"> <property name="width_request">20</property> <property name="height_request">20</property> <property name="can_focus">False</property> <property name="margin_left">5</property> <property name="margin_right">5</property> <property name="hexpand">False</property> <property name="vexpand">False</property> </object> <packing> <property name="left_attach">3</property> <property name="top_attach">0</property> </packing> </child> </object> <packing> <property name="expand">False</property> <property 
name="fill">True</property> <property name="position">5</property> </packing> </child> <child> <object class="GtkSeparator" id="separator2"> <property name="visible">True</property> <property name="can_focus">False</property> </object> <packing> <property name="expand">False</property> <property name="fill">True</property> <property name="position">6</property> </packing> </child> </object> </child> </object> <object class="GtkMessageDialog" id="messagedialog1"> <property name="can_focus">False</property> <property name="type_hint">dialog</property> <child internal-child="vbox"> <object class="GtkBox" id="messagedialog-vbox3"> <property name="can_focus">False</property> <property name="orientation">vertical</property> <property name="spacing">2</property> <child internal-child="action_area"> <object class="GtkButtonBox" id="messagedialog-action_area3"> <property name="can_focus">False</property> <property name="layout_style">end</property> <child> <placeholder/> </child> <child> <placeholder/> </child> </object> <packing> <property name="expand">False</property> <property name="fill">True</property> <property name="pack_type">end</property> <property name="position">0</property> </packing> </child> </object> </child> </object> <object class="GtkMessageDialog" id="messagedialogOperationError"> <property name="can_focus">False</property> <property name="type_hint">dialog</property> <property name="message_type">error</property> <property name="buttons">close</property> <property name="text" translatable="yes"> </property> <child internal-child="vbox"> <object class="GtkBox" id="messagedialog-vbox"> <property name="can_focus">False</property> <property name="orientation">vertical</property> <property name="spacing">2</property> <child internal-child="action_area"> <object class="GtkButtonBox" id="messagedialog-action_area"> <property name="can_focus">False</property> <property name="layout_style">end</property> </object> <packing> <property 
name="expand">False</property> <property name="fill">True</property> <property name="pack_type">end</property> <property name="position">0</property> </packing> </child> </object> </child> </object> <object class="GtkTreeStore" id="treestoreActualPlugins"/> <object class="GtkTreeView" id="treeviewActualPlugins"> <property name="visible">True</property> <property name="can_focus">True</property> <property name="model">treestoreActualPlugins</property> <signal name="row-activated" handler="on_treeview_click" swapped="no"/> <child internal-child="selection"> <object class="GtkTreeSelection" id="treeview-selection2"/> </child> </object> <object class="GtkDialog" id="tunedDaemonExceptionDialog"> <property name="can_focus">False</property> <property name="border_width">5</property> <property name="type_hint">dialog</property> <child> <placeholder/> </child> <child internal-child="vbox"> <object class="GtkBox" id="dialog-vbox1"> <property name="can_focus">False</property> <property name="orientation">vertical</property> <property name="spacing">2</property> <child internal-child="action_area"> <object class="GtkButtonBox" id="dialog-action_area1"> <property name="can_focus">False</property> <property name="layout_style">end</property> <child> <object class="GtkButton" id="turn_daemon_on_button"> <property name="label" translatable="yes">Turn On</property> <property name="visible">True</property> <property name="can_focus">True</property> <property name="receives_default">True</property> </object> <packing> <property name="expand">False</property> <property name="fill">True</property> <property name="position">0</property> </packing> </child> <child> <object class="GtkButton" id="cancel_button"> <property name="label" translatable="yes">Exit</property> <property name="visible">True</property> <property name="can_focus">True</property> <property name="receives_default">True</property> <signal name="clicked" handler="HideTunedDaemonExceptionDialog" swapped="no"/> </object> 
<packing> <property name="expand">False</property> <property name="fill">True</property> <property name="position">1</property> </packing> </child> </object> <packing> <property name="expand">False</property> <property name="fill">True</property> <property name="pack_type">end</property> <property name="position">0</property> </packing> </child> <child> <object class="GtkLabel" id="label3"> <property name="visible">True</property> <property name="can_focus">False</property> <property name="margin_top">50</property> <property name="label" translatable="yes">The tuned daemon is not running.</property> </object> <packing> <property name="expand">False</property> <property name="fill">True</property> <property name="position">1</property> </packing> </child> </object> </child> <action-widgets> <action-widget response="0">turn_daemon_on_button</action-widget> <action-widget response="1">cancel_button</action-widget> </action-widgets> </object> <object class="GtkWindow" id="windowProfileEditor"> <property name="can_focus">False</property> <property name="title" translatable="yes">Profile Editor</property> <property name="destroy_with_parent">True</property> <signal name="delete-event" handler="on_delete_event" swapped="no"/> <child> <placeholder/> </child> <child> <object class="GtkGrid" id="grid4"> <property name="visible">True</property> <property name="can_focus">False</property> <child> <object class="GtkLabel" id="labelUpdateProfile"> <property name="visible">True</property> <property name="can_focus">False</property> <property name="margin_top">20</property> <property name="margin_bottom">20</property> <property name="label" translatable="yes">Create Profile</property> <property name="ellipsize">start</property> </object> <packing> <property name="left_attach">0</property> <property name="top_attach">0</property> </packing> </child> <child> <object class="GtkGrid" id="grid5"> <property name="visible">True</property> <property name="can_focus">False</property> <child> 
<object class="GtkBox" id="box5"> <property name="visible">True</property> <property name="can_focus">False</property> <property name="orientation">vertical</property> <child> <object class="GtkButton" id="buttonAddPlugin"> <property name="label" translatable="yes">Add Plugin</property> <property name="width_request">160</property> <property name="visible">True</property> <property name="can_focus">True</property> <property name="receives_default">True</property> <property name="margin_left">20</property> <property name="margin_right">20</property> <property name="margin_top">12</property> <property name="margin_bottom">13</property> <signal name="clicked" handler="execute_add_plugin_to_notebook" swapped="no"/> </object> <packing> <property name="expand">True</property> <property name="fill">True</property> <property name="position">1</property> </packing> </child> <child> <object class="GtkButton" id="buttonRemovePlugin"> <property name="label" translatable="yes">Remove Plugin</property> <property name="width_request">160</property> <property name="visible">True</property> <property name="can_focus">True</property> <property name="receives_default">True</property> <property name="margin_left">20</property> <property name="margin_right">20</property> <signal name="clicked" handler="execute_remove_plugin_from_notebook" swapped="no"/> </object> <packing> <property name="expand">True</property> <property name="fill">True</property> <property name="position">2</property> </packing> </child> <child> <object class="GtkBox" id="box6"> <property name="width_request">160</property> <property name="height_request">0</property> <property name="visible">True</property> <property name="can_focus">False</property> <property name="margin_top">150</property> <property name="margin_bottom">150</property> <property name="orientation">vertical</property> <child> <object class="GtkSeparator" id="separator8"> <property name="visible">True</property> <property 
name="can_focus">False</property> </object> <packing> <property name="expand">False</property> <property name="fill">True</property> <property name="position">0</property> </packing> </child> <child> <object class="GtkButton" id="buttonOpenRaw"> <property name="label" translatable="yes">Raw Editor</property> <property name="width_request">160</property> <property name="visible">True</property> <property name="can_focus">True</property> <property name="receives_default">True</property> <property name="halign">center</property> <property name="margin_top">10</property> <property name="margin_bottom">10</property> <signal name="clicked" handler="execute_open_raw_button" swapped="no"/> </object> <packing> <property name="expand">True</property> <property name="fill">True</property> <property name="position">1</property> </packing> </child> <child> <object class="GtkSeparator" id="separator9"> <property name="visible">True</property> <property name="can_focus">False</property> </object> <packing> <property name="expand">False</property> <property name="fill">True</property> <property name="position">2</property> </packing> </child> </object> <packing> <property name="expand">False</property> <property name="fill">True</property> <property name="position">3</property> </packing> </child> <child> <object class="GtkButton" id="buttonConfirmProfileCreate"> <property name="label" translatable="yes">Confirm</property> <property name="width_request">160</property> <property name="visible">True</property> <property name="can_focus">True</property> <property name="receives_default">True</property> <property name="margin_left">20</property> <property name="margin_right">20</property> <property name="margin_top">20</property> <property name="margin_bottom">20</property> <signal name="clicked" handler="on_click_button_confirm_profile_create" swapped="no"/> </object> <packing> <property name="expand">True</property> <property name="fill">True</property> <property 
name="position">4</property> </packing> </child> <child> <object class="GtkButton" id="buttonConfirmProfileUpdate"> <property name="label" translatable="yes">Confirm</property> <property name="visible">True</property> <property name="can_focus">True</property> <property name="receives_default">True</property> <property name="margin_left">20</property> <property name="margin_right">20</property> <property name="margin_top">20</property> <property name="margin_bottom">20</property> <signal name="clicked" handler="on_click_button_confirm_profile_update" swapped="no"/> </object> <packing> <property name="expand">True</property> <property name="fill">True</property> <property name="position">5</property> </packing> </child> <child> <object class="GtkButton" id="buttonCancel"> <property name="label">gtk-cancel</property> <property name="width_request">160</property> <property name="visible">True</property> <property name="can_focus">True</property> <property name="receives_default">True</property> <property name="margin_left">20</property> <property name="margin_right">20</property> <property name="margin_bottom">20</property> <property name="use_stock">True</property> <signal name="clicked" handler="execute_cancel_window_profile_editor" swapped="no"/> </object> <packing> <property name="expand">True</property> <property name="fill">True</property> <property name="position">6</property> </packing> </child> </object> <packing> <property name="left_attach">1</property> <property name="top_attach">0</property> </packing> </child> <child> <object class="GtkNotebook" id="notebookPlugins"> <property name="width_request">500</property> <property name="height_request">450</property> <property name="visible">True</property> <property name="can_focus">True</property> <property name="margin_left">10</property> <property name="margin_right">10</property> <property name="margin_top">10</property> <property name="margin_bottom">10</property> <property name="hexpand">True</property> 
<property name="vexpand">True</property> <property name="scrollable">True</property> <child> <object class="GtkScrolledWindow" id="scrolledwindow3"> <property name="visible">True</property> <property name="can_focus">True</property> <property name="shadow_type">in</property> <child> <placeholder/> </child> </object> </child> <child type="tab"> <placeholder/> </child> </object> <packing> <property name="left_attach">0</property> <property name="top_attach">0</property> </packing> </child> </object> <packing> <property name="left_attach">0</property> <property name="top_attach">4</property> </packing> </child> <child> <object class="GtkSeparator" id="separator6"> <property name="visible">True</property> <property name="can_focus">False</property> <property name="margin_top">10</property> <property name="margin_bottom">10</property> </object> <packing> <property name="left_attach">0</property> <property name="top_attach">1</property> </packing> </child> <child> <object class="GtkGrid" id="grid7"> <property name="visible">True</property> <property name="can_focus">False</property> <child> <object class="GtkLabel" id="label23"> <property name="visible">True</property> <property name="can_focus">False</property> <property name="margin_left">20</property> <property name="label" translatable="yes">Profile Name: </property> </object> <packing> <property name="left_attach">0</property> <property name="top_attach">0</property> </packing> </child> <child> <object class="GtkEntry" id="entryProfileName"> <property name="width_request">260</property> <property name="visible">True</property> <property name="can_focus">True</property> </object> <packing> <property name="left_attach">1</property> <property name="top_attach">0</property> </packing> </child> <child> <object class="GtkComboBox" id="comboboxIncludeProfile"> <property name="width_request">150</property> <property name="visible">True</property> <property name="can_focus">False</property> <property 
name="halign">end</property> <property name="hexpand">True</property> </object> <packing> <property name="left_attach">2</property> <property name="top_attach">0</property> </packing> </child> <child> <object class="GtkToggleButton" id="togglebuttonIncludeProfile"> <property name="label" translatable="yes">Include</property> <property name="visible">True</property> <property name="can_focus">True</property> <property name="receives_default">True</property> <property name="halign">start</property> <property name="valign">end</property> <property name="margin_left">10</property> <property name="margin_right">10</property> <property name="hexpand">True</property> </object> <packing> <property name="left_attach">3</property> <property name="top_attach">0</property> </packing> </child> </object> <packing> <property name="left_attach">0</property> <property name="top_attach">2</property> </packing> </child> <child> <object class="GtkSeparator" id="separator7"> <property name="visible">True</property> <property name="can_focus">False</property> <property name="margin_top">10</property> <property name="margin_bottom">10</property> </object> <packing> <property name="left_attach">0</property> <property name="top_attach">3</property> </packing> </child> </object> </child> </object> <object class="GtkWindow" id="windowProfileEditorRaw"> <property name="can_focus">False</property> <signal name="delete-event" handler="on_delete_event" swapped="no"/> <child> <placeholder/> </child> <child> <object class="GtkBox" id="box8"> <property name="visible">True</property> <property name="can_focus">False</property> <property name="orientation">vertical</property> <child> <object class="GtkLabel" id="label17"> <property name="visible">True</property> <property name="can_focus">False</property> <property name="margin_top">20</property> <property name="margin_bottom">20</property> <property name="label" translatable="yes">Profile Config: </property> </object> <packing> <property 
name="expand">False</property> <property name="fill">True</property> <property name="position">0</property> </packing> </child> <child> <object class="GtkScrolledWindow" id="scrolledwindow2"> <property name="visible">True</property> <property name="can_focus">True</property> <property name="shadow_type">in</property> <child> <object class="GtkTextView" id="textviewProfileConfigRaw"> <property name="visible">True</property> <property name="can_focus">True</property> <property name="margin_left">5</property> <property name="margin_right">5</property> <property name="margin_top">5</property> <property name="margin_bottom">5</property> </object> </child> </object> <packing> <property name="expand">False</property> <property name="fill">True</property> <property name="position">1</property> </packing> </child> <child> <object class="GtkGrid" id="grid9"> <property name="visible">True</property> <property name="can_focus">False</property> <property name="halign">end</property> <child> <object class="GtkButton" id="buttonApply"> <property name="label">gtk-apply</property> <property name="visible">True</property> <property name="can_focus">True</property> <property name="receives_default">True</property> <property name="margin_left">10</property> <property name="margin_right">10</property> <property name="margin_top">10</property> <property name="margin_bottom">10</property> <property name="use_stock">True</property> <signal name="clicked" handler="execute_apply_window_profile_editor_raw" swapped="no"/> </object> <packing> <property name="left_attach">0</property> <property name="top_attach">0</property> </packing> </child> <child> <object class="GtkButton" id="buttonCancelRaw"> <property name="label">gtk-close</property> <property name="visible">True</property> <property name="can_focus">True</property> <property name="receives_default">True</property> <property name="margin_left">10</property> <property name="margin_right">10</property> <property 
name="margin_top">10</property> <property name="margin_bottom">10</property> <property name="use_stock">True</property> <signal name="clicked" handler="execute_cancel_window_profile_editor_raw" swapped="no"/> </object> <packing> <property name="left_attach">1</property> <property name="top_attach">0</property> </packing> </child> </object> <packing> <property name="expand">False</property> <property name="fill">True</property> <property name="pack_type">end</property> <property name="position">2</property> </packing> </child> </object> </child> </object> </interface> 07070100000110000081ED0000000000000000000000016391BC3A000071DE000000000000000000000000000000000000002900000000tuned-2.19.0.29+git.b894a3e/tuned-gui.py#!/usr/bin/python3 # -*- coding: utf-8 -*- # Copyright (C) 2014 Red Hat, Inc. # Authors: Marek Staňa, Jaroslav Škarvada <jskarvad@redhat.com> # # This program is free software; you can redistribute it and/or # modify it under the terms of the GNU General Public License # as published by the Free Software Foundation; either version 2 # of the License, or (at your option) any later version. # # This program is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU General Public License for more details. # # You should have received a copy of the GNU General Public License # along with this program; if not, write to the Free Software # Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA. 
# ''' Created on Oct 15, 2013 @author: mstana ''' from __future__ import print_function try: import gi except ImportError: raise ImportError("Gtk3 backend requires pygobject to be installed.") try: gi.require_version("Gtk", "3.0") except AttributeError: raise ImportError( "pygobject version too old -- it must have require_version") except ValueError: raise ImportError( "Gtk3 backend requires the GObject introspection bindings for Gtk 3 " "to be installed.") try: from gi.repository import Gtk, GObject except ImportError: raise ImportError("Gtk3 backend requires pygobject to be installed.") import sys import os import time import collections import subprocess import tuned.logs import tuned.consts as consts import tuned.version as version import tuned.admin.dbus_controller import tuned.gtk.gui_profile_loader import tuned.gtk.gui_plugin_loader import tuned.profiles.profile as profile import tuned.utils.global_config from tuned.gtk.tuned_dialog import TunedDialog from tuned.gtk.managerException import ManagerException EXECNAME = '/usr/sbin/tuned-gui' GLADEUI = '/usr/share/tuned/ui/tuned-gui.glade' LICENSE = 'GNU GPL version 2 or later <http://gnu.org/licenses/gpl.html>' NAME = 'tuned' VERSION = 'tuned ' + str(version.TUNED_VERSION_MAJOR) + '.' + \ str(version.TUNED_VERSION_MINOR) + '.' + str(version.TUNED_VERSION_PATCH) COPYRIGHT = 'Copyright (C) 2014 Red Hat, Inc.' AUTHORS = [ '', 'Marek Staňa', 'Jaroslav Škarvada <jskarvad@redhat.com>', ] class Base(object): """ GUI class for program TuneD. 
""" is_admin = False def _starting(self): try: self.controller = \ tuned.admin.DBusController(consts.DBUS_BUS, consts.DBUS_INTERFACE, consts.DBUS_OBJECT) self.controller.is_running() except tuned.admin.exceptions.TunedAdminDBusException as ex: response = self._gobj('tunedDaemonExceptionDialog').run() if response == 0: # button Turn ON pressed # switch_tuned_start_stop notify the switch which call funcion start_tuned self._start_tuned() self._gobj('tunedDaemonExceptionDialog').hide() return True else: self.error_dialog('TuneD is shutting down.', 'Reason: missing communication with TuneD daemon.' ) return False return True def __init__(self): self.active_profile = None self.config = tuned.utils.global_config.GlobalConfig() self.builder = Gtk.Builder() try: self.builder.add_from_file(GLADEUI) except GObject.GError as e: print("Error loading '%s'" % GLADEUI, file=sys.stderr) sys.exit(1) # # DIALOGS # self.builder.connect_signals(self) if not self._starting(): return self.manager = tuned.gtk.gui_profile_loader.GuiProfileLoader( tuned.consts.LOAD_DIRECTORIES) self.plugin_loader = tuned.gtk.gui_plugin_loader.GuiPluginLoader() self._build_about_dialog() # # SET WIDGETS # self.treestore_profiles = Gtk.ListStore(GObject.TYPE_STRING, GObject.TYPE_STRING) self.treestore_plugins = Gtk.ListStore(GObject.TYPE_STRING) for plugin_name in self.plugin_loader.plugins: self.treestore_plugins.append([plugin_name]) self._gobj('comboboxPlugins').set_model(self.treestore_plugins) self._gobj('comboboxMainPlugins').set_model(self.treestore_plugins) self._gobj('comboboxIncludeProfile').set_model(self.treestore_profiles) cell = Gtk.CellRendererText() self._gobj('comboboxIncludeProfile').pack_start(cell, True) self._gobj('comboboxIncludeProfile').add_attribute(cell, 'text', 0) self._gobj('treeviewProfileManager').append_column(Gtk.TreeViewColumn('Type' , Gtk.CellRendererText(), text=1)) self._gobj('treeviewProfileManager').append_column(Gtk.TreeViewColumn('Name' , Gtk.CellRendererText(), 
text=0)) self._gobj('treeviewProfileManager').set_model(self.treestore_profiles) self._update_profile_list() self._gobj('treeviewProfileManager').get_selection().select_path(0) self._gobj('labelActualProfile').set_text(self.controller.active_profile()) if self.config.get(consts.CFG_RECOMMEND_COMMAND): self._gobj('label_recommemnded_profile').set_text(self.controller.recommend_profile()) self.data_for_listbox_summary_of_active_profile() self._gobj('comboboxtextFastChangeProfile').set_model(self.treestore_profiles) self._gobj('labelDbusStatus').set_text(str(bool(self.controller.is_running()))) self._gobj('switchTunedStartupStartStop').set_active( self.service_run_on_start_up('tuned')) self._gobj('switchTunedAdminFunctions').set_active(self.is_admin) self._gobj('comboboxtextFastChangeProfile').set_active(self.get_iter_from_model_by_name(self._gobj('comboboxtextFastChangeProfile').get_model(), self.controller.active_profile())) self.editing_profile_name = None # self.treeview_profile_manager.connect('row-activated',lambda x,y,z: self.execute_update_profile(x,y)) # TODO: needs to be fixed - double click on treeview self._gobj('mainWindow').show() Gtk.main() def get_iter_from_model_by_name(self, model, item_name): ''' Return the index of the item in the given model whose first column matches item_name, or 0 if no item matches. ''' selected = 0 for item in model: try: if item[0] == item_name: selected = int(item.path.to_string()) except KeyError: pass return selected def is_tuned_connection_ok(self): """ Return True if the tuned daemon is running, False otherwise. If it is not running, this method tries to start tuned.
""" try: self.controller.is_running() return True except tuned.admin.exceptions.TunedAdminDBusException: response = self._gobj('tunedDaemonExceptionDialog').run() if response == 0: # button Turn ON pressed # switch_tuned_start_stop notify the switch which call funcion start_tuned try: self._start_tuned() self._gobj('tunedDaemonExceptionDialog').hide() self._gobj('switchTunedStartStop').set_active(True) return True except: self._gobj('tunedDaemonExceptionDialog').hide() return False else: self._gobj('tunedDaemonExceptionDialog').hide() return False def data_for_listbox_summary_of_active_profile(self): """ This add rows to object listbox_summary_of_active_profile. Row consist of grid. Inside grid on first possition is label, second possition is vertical grid. label = name of plugin verical grid consist of labels where are stored values for plugin option and value. This method is emited after change profile and on startup of app. """ for row in self._gobj('listboxSummaryOfActiveProfile'): self._gobj('listboxSummaryOfActiveProfile').remove(row) if self.is_tuned_connection_ok(): self.active_profile = \ self.manager.get_profile(self.controller.active_profile()) else: self.active_profile = None self._gobj('summaryProfileName').set_text(self.active_profile.name) try: self._gobj('summaryIncludedProfileName').set_text(self.active_profile.options['include' ]) except: # keyerror probably self._gobj('summaryIncludedProfileName').set_text('None') row = Gtk.ListBoxRow() box = Gtk.Box(orientation=Gtk.Orientation.HORIZONTAL, spacing=0) plugin_name = Gtk.Label() plugin_name.set_markup('<b>Plugin Name</b>') plugin_option = Gtk.Label() plugin_option.set_markup('<b>Plugin Options</b>') box.pack_start(plugin_name, True, True, 0) box.pack_start(plugin_option, True, True, 0) row.add(box) self._gobj('listboxSummaryOfActiveProfile').add(row) sep = Gtk.Separator.new(Gtk.Orientation.HORIZONTAL) self._gobj('listboxSummaryOfActiveProfile').add(sep) sep.show() for u in self.active_profile.units: 
row = Gtk.ListBoxRow() hbox = Gtk.Box(orientation=Gtk.Orientation.HORIZONTAL, spacing=0) hbox.set_homogeneous(True) row.add(hbox) label = Gtk.Label() label.set_markup(u) label.set_justify(Gtk.Justification.LEFT) hbox.pack_start(label, False, True, 1) grid = Gtk.Box(orientation=Gtk.Orientation.VERTICAL, spacing=0) grid.set_homogeneous(True) for o in self.active_profile.units[u].options: label_option = Gtk.Label() label_option.set_markup(o + ' = ' + '<b>' + self.active_profile.units[u].options[o] + '</b>') grid.pack_start(label_option, False, True, 0) hbox.pack_start(grid, False, True, 0) self._gobj('listboxSummaryOfActiveProfile').add(row) separator = Gtk.Separator.new(Gtk.Orientation.HORIZONTAL) self._gobj('listboxSummaryOfActiveProfile').add(separator) separator.show() self._gobj('listboxSummaryOfActiveProfile').show_all() # def on_treeview_button_press_event(self, treeview, event): # popup = Gtk.Menu() # popup.append(Gtk.MenuItem('add')) # popup.append(Gtk.MenuItem('delete')) # # if event.button == 3: # x = int(event.x) # y = int(event.y) # time = event.time # pthinfo = treeview.get_path_at_pos(x, y) # if pthinfo is not None: # path, col, cellx, celly = pthinfo # treeview.grab_focus() # treeview.set_cursor(path, col, 0) # popup.popup(None, None, lambda menu, data: # (event.get_root_coords()[0], event.get_root_coords()[1], True), None, event.button, event.time) # return True def on_changed_combobox_plugins(self, combo): plugin_name = self._gobj('comboboxMainPlugins').get_active_text() plugin_parameters = self.plugin_loader.plugins.get(plugin_name, None) if plugin_parameters is None: self._gobj('textviewPluginAvaibleText').get_buffer().set_text('') self._gobj('textviewPluginDocumentationText').get_buffer().set_text('' ) return plugin_hints = self.plugin_loader.get_plugin_hints(plugin_name) options = '' for (key, val) in plugin_parameters.items(): options += '%s = %r\n' % (key, str(val)) hint = plugin_hints.get(key, None) if hint: options += '%s\n' % (str(hint)) 
self._gobj('textviewPluginAvaibleText').get_buffer().set_text(options) plugin_doc = self.plugin_loader.get_plugin_doc(plugin_name) self._gobj('textviewPluginDocumentationText').get_buffer().set_text( plugin_doc ) def on_delete_event(self, window, data): window.hide() return True def _get_active_profile_name(self): return self.manager.get_profile(self.controller.active_profile()).name def execute_remove_profile(self, button): profile = self.get_treeview_selected() try: if self._get_active_profile_name() == profile: self.error_dialog('You cannot remove the active profile', 'Please deactivate it by choosing another profile!' ) return if profile is None: self.error_dialog('No profile selected!', '') return if self._gobj('windowProfileEditor').is_active(): self.error_dialog('You are editing the ' + self.editing_profile_name + ' profile.', 'Please close the edit window and try again.' ) return try: self.manager.remove_profile(profile, is_admin=self.is_admin) except ManagerException: self.error_dialog('failed to authorize', '') return for item in self.treestore_profiles: if item[0] == profile: iter = self.treestore_profiles.get_iter(item.path) self.treestore_profiles.remove(iter) except ManagerException as ex: self.error_dialog('Profile cannot be removed', str(ex)) def execute_cancel_window_profile_editor(self, button): self._gobj('windowProfileEditor').hide() def execute_cancel_window_profile_editor_raw(self, button): self._gobj('windowProfileEditorRaw').hide() def execute_open_raw_button(self, button): profile_name = self.get_treeview_selected() text_buffer = self._gobj('textviewProfileConfigRaw').get_buffer() text_buffer.set_text(self.manager.get_raw_profile(profile_name)) self._gobj('windowProfileEditorRaw').show_all() def execute_add_plugin_to_notebook(self, button): if self.choose_plugin_dialog() == 1: plugin_name = self._gobj('comboboxPlugins').get_active_text() plugin_to_tab = None for plugin in self.plugin_loader.plugins: if plugin == plugin_name: for children in
self._gobj('notebookPlugins'): if plugin_name \ == self._gobj('notebookPlugins').get_menu_label_text(children): self.error_dialog('Plugin ' + plugin_name + ' is already in the profile.', '') return config_options = self.plugin_loader.plugins[plugin] self._gobj('notebookPlugins').append_page_menu( self.treeview_for_data( config_options, plugin), Gtk.Label(plugin), Gtk.Label(plugin) ) self._gobj('notebookPlugins').show_all() def execute_remove_plugin_from_notebook(self, data): treestore = Gtk.ListStore(GObject.TYPE_STRING) for children in self._gobj('notebookPlugins').get_children(): treestore.append([self._gobj('notebookPlugins').get_menu_label_text(children)]) self._gobj('comboboxPlugins').set_model(treestore) response_of_dialog = self.choose_plugin_dialog() if response_of_dialog == 1: # OK button pressed selected = self._gobj('comboboxPlugins').get_active_text() for children in self._gobj('notebookPlugins').get_children(): if self._gobj('notebookPlugins').get_menu_label_text(children) \ == selected: self._gobj('notebookPlugins').remove(children) self._gobj('comboboxPlugins').set_model(self.treestore_plugins) def execute_apply_window_profile_editor_raw(self, data): text_buffer = self._gobj('textviewProfileConfigRaw').get_buffer() start = text_buffer.get_start_iter() end = text_buffer.get_end_iter() profile_name = self.get_treeview_selected() try: self.manager.set_raw_profile(profile_name, text_buffer.get_text(start, end, True)) except Exception: self.error_dialog('Error while parsing raw configuration', '') return self.error_dialog('Profile Editor will be closed.', 'Reopen the profile for further updates.') self._gobj('windowProfileEditor').hide() self._gobj('windowProfileEditorRaw').hide() # refresh window_profile_editor def execute_create_profile(self, button): self.reset_values_window_edit_profile() self._gobj('buttonConfirmProfileCreate').show() self._gobj('buttonConfirmProfileUpdate').hide() self._gobj('buttonOpenRaw').hide() for child in
self._gobj('notebookPlugins').get_children(): self._gobj('notebookPlugins').remove(child) self._gobj('windowProfileEditor').show() def reset_values_window_edit_profile(self): self._gobj('entryProfileName').set_text('') self._gobj('comboboxIncludeProfile').set_active(0) for child in self._gobj('notebookPlugins').get_children(): self._gobj('notebookPlugins').remove(child) def get_treeview_selected(self): """ Return value of treeview which is selected at calling moment of this function. """ selection = self._gobj('treeviewProfileManager').get_selection() (model, iter) = selection.get_selected() if iter is None: self.error_dialog('No profile selected', '') return self.treestore_profiles.get_value(iter, 0) def on_click_button_confirm_profile_update(self, data): profile_name = self.get_treeview_selected() prof = self.data_to_profile_config() for item in self.treestore_profiles: try: if item[0] == profile_name: iter = self.treestore_profiles.get_iter(item.path) self.treestore_profiles.remove(iter) except KeyError: raise KeyError('this cant happen') try: self.manager.update_profile(profile_name, prof, self.is_admin) except ManagerException: self.error_dialog('failed to authorize', '') return if self.manager.is_profile_factory(prof.name): prefix = consts.PREFIX_PROFILE_FACTORY else: prefix = consts.PREFIX_PROFILE_USER self.treestore_profiles.append([prof.name, prefix]) self._gobj('windowProfileEditor').hide() def data_to_profile_config(self): name = self._gobj('entryProfileName').get_text() config = collections.OrderedDict() activated = self._gobj('comboboxIncludeProfile').get_active() model = self._gobj('comboboxIncludeProfile').get_model() include = model[activated][0] if self._gobj('togglebuttonIncludeProfile').get_active(): config['main'] = {'include': include} for children in self._gobj('notebookPlugins'): acumulate_options = {} for item in children.get_model(): if item[0] != 'None': acumulate_options[item[1]] = item[0] 
config[self._gobj('notebookPlugins').get_menu_label_text(children)] = \ acumulate_options return profile.Profile(name, config) def on_click_button_confirm_profile_create(self, data): # try: prof = self.data_to_profile_config() try: self.manager.save_profile(prof) except ManagerException: self.error_dialog('failed to authorize', '') return self.manager._load_all_profiles() self.treestore_profiles.append([prof.name, consts.PREFIX_PROFILE_USER]) self._gobj('windowProfileEditor').hide() # except ManagerException: # self.error_dialog("Profile with name " + prof.name # + " already exist.", "Please choose another name for profile") def execute_update_profile(self, data): # if (self.treeview_profile_manager.get_activate_on_single_click()): # print "returning" # print self.treeview_profile_manager.get_activate_on_single_click() # return self._gobj('buttonConfirmProfileCreate').hide() self._gobj('buttonConfirmProfileUpdate').show() self._gobj('buttonOpenRaw').show() label_update_profile = \ self.builder.get_object('labelUpdateProfile') label_update_profile.set_text('Update Profile') for child in self._gobj('notebookPlugins').get_children(): self._gobj('notebookPlugins').remove(child) self.editing_profile_name = self.get_treeview_selected() if self.editing_profile_name is None: self.error_dialog('No profile Selected', 'To update profile please select profile.' ) return if self._get_active_profile_name() == self.editing_profile_name: self.error_dialog('You can not update active profile', 'Please deactivate profile by choosing another!' 
) return copied_profile = None if not self.manager.is_profile_removable(self.editing_profile_name): if not self.manager.get_profile( self.editing_profile_name + '-modified'): if not TunedDialog('System profile can not be modified ' + 'but you can create its copy', 'create copy', 'cancel' ).run(): return copied_profile = self.manager.get_profile( self.editing_profile_name) copied_profile.name = self.editing_profile_name + '-modified' try: self.manager.save_profile(copied_profile) except ManagerException: self.error_dialog('failed to authorize', '') return else: if not TunedDialog('System profile can not be modified ' + 'but you can use its copy', 'open copy', 'cancel' ).run(): return copied_profile = self.manager.get_profile( self.editing_profile_name + '-modified') self._update_profile_list() for row in range(len(self.treestore_profiles)): if self.treestore_profiles[row][0] == self.editing_profile_name + '-modified': self._gobj('treeviewProfileManager').get_selection().select_path(row) break profile = copied_profile or self.manager.get_profile(self.editing_profile_name) self._gobj('entryProfileName').set_text(profile.name) model = self._gobj('comboboxIncludeProfile').get_model() selected = 0 self._gobj('togglebuttonIncludeProfile').set_active(False) for item in model: try: if item[0] == profile.options['include']: selected = int(item.path.to_string()) self._gobj('togglebuttonIncludeProfile').set_active(True) except KeyError: pass # profile dont have include section self._gobj('comboboxIncludeProfile').set_active(selected) # load all values not just normal for (name, unit) in list(profile.units.items()): self._gobj('notebookPlugins').append_page_menu(self.treeview_for_data(unit.options, unit.name), Gtk.Label(unit.name), Gtk.Label(unit.name)) self._gobj('notebookPlugins').show_all() self._gobj('windowProfileEditor').show() def treeview_for_data(self, data, plugin_name): """ This prepare treestore and treeview for data and return treeview """ treestore = 
Gtk.ListStore(GObject.TYPE_STRING, GObject.TYPE_STRING) for (option, value) in list(data.items()): treestore.append([str(value), option]) treeview = Gtk.TreeView(treestore) renderer = Gtk.CellRendererText() # the store holds [value, option], so 'Option' maps to column 1 and 'Value' to column 0 column_option = Gtk.TreeViewColumn('Option', renderer, text=1) column_value = Gtk.TreeViewColumn('Value', renderer, text=0) treeview.append_column(column_option) treeview.append_column(column_value) treeview.enable_grid_lines = True treeview.connect('row-activated', self.change_value_dialog) treeview.connect('button_press_event', self.on_treeview_click) treeview.set_property('has-tooltip',True) model = treeview.get_model() treeview.connect( 'query-tooltip', lambda widget, x, y, keyboard_mode, tooltip: self.on_option_tooltip(widget, x, y, keyboard_mode, tooltip, plugin_name, model ) ) return treeview def execute_change_profile(self, button): """ Change the profile in the main window. """ self._gobj('spinnerFastChangeProfile').show() self._gobj('spinnerFastChangeProfile').start() if button is not None: text = \ self._gobj('comboboxtextFastChangeProfile').get_active_text() if text is not None: if self.is_tuned_connection_ok(): self.controller.switch_profile(text) self._gobj('labelActualProfile').set_text(self.controller.active_profile()) self.data_for_listbox_summary_of_active_profile() self.active_profile = \ self.manager.get_profile(self.controller.active_profile()) else: self._gobj('labelActualProfile').set_text('') else: self.error_dialog('No profile selected', '') self._gobj('spinnerFastChangeProfile').stop() self._gobj('spinnerFastChangeProfile').hide() def execute_switch_tuned(self, switch, data): """ Supports switch_tuned_start_stop and switch_tuned_startup_start_stop.
""" if switch == self._gobj('switchTunedStartStop'): # starts or stop tuned daemon if self._gobj('switchTunedStartStop').get_active(): self.is_tuned_connection_ok() else: self._su_execute(['service', 'tuned', 'stop']) self.error_dialog('TuneD Daemon is turned off', 'Support of tuned is not running.') elif switch == self._gobj('switchTunedStartupStartStop'): # switch option for start tuned on start up if self._gobj('switchTunedStartupStartStop').get_active(): self._su_execute(['systemctl', 'enable', 'tuned']) else: self._su_execute(['systemctl', 'disable', 'tuned']) else: raise NotImplementedError() def execute_switch_tuned_admin_functions(self, switch, data): self.is_admin = self._gobj('switchTunedAdminFunctions').get_active() def service_run_on_start_up(self, service): """ Depends on if tuned is set to run on startup of system return true if yes, else return false """ temp = self._execute(['systemctl', 'is-enabled', service, '-q']) if temp == 0: return True return False def error_dialog(self, error, info): """ General error dialog with two fields. Primary and secondary text fields. """ self._gobj('messagedialogOperationError').set_markup(error) self._gobj('messagedialogOperationError').format_secondary_text(info) self._gobj('messagedialogOperationError').run() self._gobj('messagedialogOperationError').hide() def execute_about(self, widget): self.about_dialog.run() self.about_dialog.hide() def change_value_dialog( self, tree_view, path, treeview_column, ): """ Shows up dialog after double click on treeview which has to be stored in notebook of plugins. Th``` dialog allows you to chagne specific option's value in plugin. 
""" model = tree_view.get_model() dialog = self.builder.get_object('changeValueDialog') button_apply = self.builder.get_object('buttonApplyChangeValue') button_cancel = self.builder.get_object('buttonCancel1') entry1 = self.builder.get_object('entry1') text = self.builder.get_object('labelTextDialogChangeValue') text.set_text(model.get_value(model.get_iter(path), 1)) text = model.get_value(model.get_iter(path), 0) if text is not None: entry1.set_text(text) else: entry1.set_text('') dialog.connect('destroy', lambda d: dialog.hide()) button_cancel.connect('clicked', lambda d: dialog.hide()) if dialog.run() == 1: model.set_value(model.get_iter(path), 0, entry1.get_text()) dialog.hide() def choose_plugin_dialog(self): """ Shows up dialog with combobox where are stored plugins available to add. """ self._gobj('comboboxPlugins').set_active(0) self.button_add_plugin_dialog = \ self.builder.get_object('buttonAddPluginDialog') self.button_cancel_add_plugin_dialog = \ self.builder.get_object('buttonCloseAddPlugin') self.button_cancel_add_plugin_dialog.connect('clicked', lambda d: self._gobj('dialogAddPlugin').hide()) self._gobj('dialogAddPlugin').connect('destroy', lambda d: \ self._gobj('dialogAddPlugin').hide()) response = self._gobj('dialogAddPlugin').run() self._gobj('dialogAddPlugin').hide() return response def on_treeview_click(self, treeview, event): if event.button == 3: popup = Gtk.Menu() popup.append(Gtk.MenuItem('add')) popup.append(Gtk.MenuItem('delete')) time = event.time self._gobj('menuAddPluginValue').popup( None, None, None, None, event.button, time, ) return True @staticmethod def liststore_contains_item(liststore, item): for liststore_item in liststore: if liststore_item[1] == item: return True return False def _start_tuned(self): self._su_execute(['service', 'tuned', 'start']) time.sleep(10) self.controller = tuned.admin.DBusController(consts.DBUS_BUS, consts.DBUS_INTERFACE, consts.DBUS_OBJECT) def _gobj(self, id): """ Wrapper for self.builder.get_object 
""" return self.builder.get_object(id) def HideTunedDaemonExceptionDialog(self, sender): self._gobj('tunedDaemonExceptionDialog').hide() def _build_about_dialog(self): self.about_dialog = Gtk.AboutDialog.new() self.about_dialog.set_name(NAME) self.about_dialog.set_version(VERSION) self.about_dialog.set_license(LICENSE) self.about_dialog.set_wrap_license(True) self.about_dialog.set_copyright(COPYRIGHT) self.about_dialog.set_authors(AUTHORS) def _update_profile_list(self): self.treestore_profiles.clear() for profile_name in self.manager.get_names(): if self.manager.is_profile_factory(profile_name): self.treestore_profiles.append([profile_name, consts.PREFIX_PROFILE_FACTORY]) else: self.treestore_profiles.append([profile_name, consts.PREFIX_PROFILE_USER]) self._gobj('comboboxIncludeProfile').set_model(self.treestore_profiles) self._gobj('comboboxtextFastChangeProfile').set_model( self.treestore_profiles) self._gobj('treeviewProfileManager').set_model(self.treestore_profiles) self._gobj('comboboxtextFastChangeProfile').set_active( self.get_iter_from_model_by_name( self._gobj('comboboxtextFastChangeProfile').get_model(), self.controller.active_profile())) def _su_execute(self, args): args = ['pkexec'] + args rc = subprocess.call(args) return rc def _execute(self, args): rc = subprocess.call(args) return rc def on_option_tooltip(self, widget, x, y, keyboard_mode, tooltip, plugin_name, model): path = widget.get_path_at_pos(x, y) plugin_hints = self.plugin_loader.get_plugin_hints(plugin_name) if plugin_hints is None or len(plugin_hints) == 0: return False if not path: row_count = model.iter_n_children(None) if (row_count == 0): return False iterator = model.get_iter(row_count - 1) option_name = model.get_value(iterator, 1) elif int(str(path[0])) < 1: return False else: iterator = model.get_iter(int(str(path[0])) -1) option_name = model.get_value(iterator, 1) path = model.get_path(iterator) hint = plugin_hints.get(option_name, None) if not hint: return False 
tooltip.set_text(hint) widget.set_tooltip_row(tooltip, path) return True def gtk_main_quit(self, sender): Gtk.main_quit() if __name__ == '__main__': base = Base() 07070100000111000081A40000000000000000000000016391BC3A000006C2000000000000000000000000000000000000002C00000000tuned-2.19.0.29+git.b894a3e/tuned-main.conf# Global tuned configuration file. # Whether to use daemon. Without daemon it just applies tuning. It is # not recommended, because many functions don't work without daemon, # e.g. there will be no D-Bus, no rollback of settings, no hotplug, # no dynamic tuning, ... daemon = 1 # Dynamically tune devices; if disabled, only static tuning will be used. dynamic_tuning = 1 # How long to sleep before checking for events (in seconds); # a higher number means lower overhead but longer response time. sleep_interval = 1 # Update interval for dynamic tunings (in seconds). # It must be a multiple of the sleep_interval. update_interval = 10 # Recommend functionality; if disabled, the "recommend" command will not be # available in CLI, daemon will not parse recommend.conf but will return # one hardcoded profile (by default "balanced"). recommend_command = 1 # Whether to reapply sysctl from /run/sysctl.d/, /etc/sysctl.d/ and # /etc/sysctl.conf. If enabled, these sysctls will be re-applied # after TuneD sysctls are applied, i.e. TuneD sysctls will not # override user-provided system sysctls. reapply_sysctl = 1 # Default priority assigned to instances default_instance_priority = 0 # Udev buffer size udev_buffer_size = 1MB # Log file count log_file_count = 2 # Log file max size log_file_max_size = 1MB # Preset system uname string for architecture specific tuning. # It can be used to force tuning for specific architecture. # If commented, "uname" will be called to fill its content. # uname_string = x86_64 # Preset system cpuinfo string for architecture specific tuning. # It can be used to force tuning for specific architecture.
# If commented, "/proc/cpuinfo" will be read to fill its content. # cpuinfo_string = Intel 07070100000112000081ED0000000000000000000000016391BC3A00000D2C000000000000000000000000000000000000002500000000tuned-2.19.0.29+git.b894a3e/tuned.py#!/usr/bin/python3 -Es # # tuned: daemon for monitoring and adaptive tuning of system devices # # Copyright (C) 2008-2013 Red Hat, Inc. # # This program is free software; you can redistribute it and/or # modify it under the terms of the GNU General Public License # as published by the Free Software Foundation; either version 2 # of the License, or (at your option) any later version. # # This program is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU General Public License for more details. # # You should have received a copy of the GNU General Public License # along with this program; if not, write to the Free Software # Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA. 
# from __future__ import print_function import argparse import os import sys import traceback import tuned.logs import tuned.daemon import tuned.exceptions import tuned.consts as consts import tuned.version as ver from tuned.utils.global_config import GlobalConfig def error(message): print(message, file=sys.stderr) if __name__ == "__main__": parser = argparse.ArgumentParser(description = "Daemon for monitoring and adaptive tuning of system devices.") parser.add_argument("--daemon", "-d", action = "store_true", help = "run on background") parser.add_argument("--debug", "-D", action = "store_true", help = "show/log debugging messages") parser.add_argument("--log", "-l", nargs = "?", const = consts.LOG_FILE, help = "log to file, default file: " + consts.LOG_FILE) parser.add_argument("--pid", "-P", nargs = "?", const = consts.PID_FILE, help = "write PID file, default file: " + consts.PID_FILE) parser.add_argument("--no-dbus", action = "store_true", help = "do not attach to DBus") parser.add_argument("--profile", "-p", action = "store", type=str, metavar = "name", help = "tuning profile to be activated") parser.add_argument('--version', "-v", action = "version", version = "%%(prog)s %s.%s.%s" % (ver.TUNED_VERSION_MAJOR, ver.TUNED_VERSION_MINOR, ver.TUNED_VERSION_PATCH)) args = parser.parse_args(sys.argv[1:]) if os.geteuid() != 0: error("Superuser permissions are required to run the daemon.") sys.exit(1) config = GlobalConfig() log = tuned.logs.get() if args.debug: log.setLevel("DEBUG") try: maxBytes = config.get_size("log_file_max_size", consts.LOG_FILE_MAXBYTES) backupCount = config.get("log_file_count", consts.LOG_FILE_COUNT) if args.daemon: if args.log is None: args.log = consts.LOG_FILE log.switch_to_file(args.log, maxBytes, backupCount) else: if args.log is not None: log.switch_to_file(args.log, maxBytes, backupCount) app = tuned.daemon.Application(args.profile, config) # no daemon mode doesn't need DBus if not config.get_bool(consts.CFG_DAEMON, 
				consts.CFG_DEF_DAEMON):
			args.no_dbus = True
		if not args.no_dbus:
			app.attach_to_dbus(consts.DBUS_BUS, consts.DBUS_OBJECT, consts.DBUS_INTERFACE)

		# always write PID file
		if args.pid is None:
			args.pid = consts.PID_FILE
		if args.daemon:
			app.daemonize(args.pid)
		else:
			app.write_pid_file(args.pid)

		app.run(args.daemon)
	except tuned.exceptions.TunedException as exception:
		if (args.debug):
			traceback.print_exc()
		else:
			error(str(exception))
		sys.exit(1)

==> tuned-2.19.0.29+git.b894a3e/tuned.service <==
[Unit]
Description=Dynamic System Tuning Daemon
After=systemd-sysctl.service network.target dbus.service polkit.service
Requires=dbus.service
Conflicts=cpupower.service auto-cpufreq.service tlp.service power-profiles-daemon.service
Documentation=man:tuned(8) man:tuned.conf(5) man:tuned-adm(8)

[Service]
Type=dbus
PIDFile=/run/tuned/tuned.pid
BusName=com.redhat.tuned
ExecStart=/usr/sbin/tuned -l -P

[Install]
WantedBy=multi-user.target

==> tuned-2.19.0.29+git.b894a3e/tuned.spec <==
%bcond_with snapshot

%if 0%{?fedora}
%if 0%{?fedora} > 27
%bcond_without python3
%else
%bcond_with python3
%endif
%else
%if 0%{?rhel} && 0%{?rhel} < 8
%bcond_with python3
%else
%bcond_without python3
%endif
%endif

%if %{with python3}
%global _py python3
%global make_python_arg PYTHON=%{__python3}
%else
%{!?python2_sitelib:%global python2_sitelib %{python_sitelib}}
%if 0%{?rhel} && 0%{?rhel} < 8
%global make_python_arg PYTHON=%{__python}
%global _py python
%else
%global make_python_arg PYTHON=%{__python2}
%global _py python2
%endif
%endif

%if %{with snapshot}
%if 0%{!?git_short_commit:1}
%global git_short_commit %(git rev-parse --short=8 --verify HEAD)
%endif
%global git_date %(date +'%Y%m%d')
%global git_suffix %{git_date}git%{git_short_commit}
%endif

#%%global prerelease rc
#%%global prereleasenum 1

%global prerel1 %{?prerelease:.%{prerelease}%{prereleasenum}}
%global prerel2 %{?prerelease:-%{prerelease}.%{prereleasenum}}

Summary: A dynamic adaptive system tuning daemon
Name: tuned
Version: 2.19.0
Release: 1%{?prerel1}%{?with_snapshot:.%{git_suffix}}%{?dist}
License: GPLv2+
Source0: https://github.com/redhat-performance/%{name}/archive/v%{version}%{?prerel2}/%{name}-%{version}%{?prerel2}.tar.gz
URL: http://www.tuned-project.org/
BuildArch: noarch
BuildRequires: systemd, desktop-file-utils
%if 0%{?rhel}
BuildRequires: asciidoc
%else
BuildRequires: asciidoctor
%endif
Requires(post): systemd, virt-what
Requires(preun): systemd
Requires(postun): systemd
BuildRequires: make
BuildRequires: %{_py}, %{_py}-devel
# BuildRequires for 'make test'
# python-mock is needed for python-2.7, but it's not available on RHEL-7, only in the EPEL
%if %{without python3} && ( ! 0%{?rhel} || 0%{?rhel} >= 8 || 0%{?epel})
BuildRequires: %{_py}-mock
%endif
BuildRequires: %{_py}-pyudev
Requires: %{_py}-pyudev
Requires: %{_py}-linux-procfs, %{_py}-perf
%if %{without python3}
Requires: %{_py}-schedutils
%endif
# requires for packages with inconsistent python2/3 names
%if %{with python3}
# BuildRequires for 'make test'
BuildRequires: python3-dbus, python3-gobject-base
Requires: python3-dbus, python3-gobject-base
%if 0%{?fedora} > 22 || 0%{?rhel} > 7
Recommends: dmidecode
%endif
%else
# BuildRequires for 'make test'
BuildRequires: dbus-python, pygobject3-base
Requires: dbus-python, pygobject3-base
%endif
Requires: virt-what, ethtool, gawk
Requires: util-linux, dbus, polkit
%if 0%{?fedora} > 22 || 0%{?rhel} > 7
Recommends: dmidecode
# i686 excluded
Recommends: kernel-tools
Requires: hdparm
Requires: kmod
Requires: iproute
%endif
# syspurpose
%if 0%{?rhel} > 8
# not on CentOS
%if 0%{!?centos:1}
Recommends: subscription-manager
%endif
%else
%if 0%{?rhel} > 7
Requires: python3-syspurpose
%endif
%endif

%description
The tuned package contains a daemon that tunes system settings dynamically.
It does so by monitoring the usage of several system components periodically.
Based on that information components will then be put into lower or higher
power saving modes to adapt to the current usage. Currently only ethernet
network and ATA harddisk devices are implemented.

%if 0%{?rhel} <= 7 && 0%{!?fedora:1}
# RHEL <= 7
%global docdir %{_docdir}/%{name}-%{version}
%else
# RHEL > 7 || fedora
%global docdir %{_docdir}/%{name}
%endif

%package gtk
Summary: GTK GUI for tuned
Requires: %{name} = %{version}-%{release}
Requires: powertop, polkit
# requires for packages with inconsistent python2/3 names
%if %{with python3}
Requires: python3-gobject-base
%else
Requires: pygobject3-base
%endif

%description gtk
GTK GUI that can control tuned and provides simple profile editor.

%package utils
Requires: %{name} = %{version}-%{release}
Requires: powertop
Summary: Various tuned utilities

%description utils
This package contains utilities that can help you to fine tune and
debug your system and manage tuned profiles.

%package utils-systemtap
Summary: Disk and net statistic monitoring systemtap scripts
Requires: %{name} = %{version}-%{release}
Requires: systemtap

%description utils-systemtap
This package contains several systemtap scripts to allow detailed
manual monitoring of the system. Instead of the typical IO/sec it collects
minimal, maximal and average time between operations to be able to
identify applications that behave power inefficient (many small operations
instead of fewer large ones).

%package profiles-sap
Summary: Additional tuned profile(s) targeted to SAP NetWeaver loads
Requires: %{name} = %{version}

%description profiles-sap
Additional tuned profile(s) targeted to SAP NetWeaver loads.

%package profiles-mssql
Summary: Additional tuned profile(s) for MS SQL Server
Requires: %{name} = %{version}

%description profiles-mssql
Additional tuned profile(s) for MS SQL Server.
%package profiles-oracle
Summary: Additional tuned profile(s) targeted to Oracle loads
Requires: %{name} = %{version}

%description profiles-oracle
Additional tuned profile(s) targeted to Oracle loads.

%package profiles-sap-hana
Summary: Additional tuned profile(s) targeted to SAP HANA loads
Requires: %{name} = %{version}

%description profiles-sap-hana
Additional tuned profile(s) targeted to SAP HANA loads.

%package profiles-atomic
Summary: Additional tuned profile(s) targeted to Atomic
Requires: %{name} = %{version}

%description profiles-atomic
Additional tuned profile(s) targeted to Atomic host and guest.

%package profiles-realtime
Summary: Additional tuned profile(s) targeted to realtime
Requires: %{name} = %{version}
Requires: tuna

%description profiles-realtime
Additional tuned profile(s) targeted to realtime.

%package profiles-nfv-guest
Summary: Additional tuned profile(s) targeted to Network Function Virtualization (NFV) guest
Requires: %{name} = %{version}
Requires: %{name}-profiles-realtime = %{version}
Requires: tuna

%description profiles-nfv-guest
Additional tuned profile(s) targeted to Network Function Virtualization (NFV) guest.

%package profiles-nfv-host
Summary: Additional tuned profile(s) targeted to Network Function Virtualization (NFV) host
Requires: %{name} = %{version}
Requires: %{name}-profiles-realtime = %{version}
Requires: tuna
Requires: nmap-ncat

%description profiles-nfv-host
Additional tuned profile(s) targeted to Network Function Virtualization (NFV) host.

# this is kept for backward compatibility, it should be dropped for RHEL-8
%package profiles-nfv
Summary: Additional tuned profile(s) targeted to Network Function Virtualization (NFV)
Requires: %{name} = %{version}
Requires: %{name}-profiles-nfv-guest = %{version}
Requires: %{name}-profiles-nfv-host = %{version}

%description profiles-nfv
Additional tuned profile(s) targeted to Network Function Virtualization (NFV).
%package profiles-cpu-partitioning
Summary: Additional tuned profile(s) optimized for CPU partitioning
Requires: %{name} = %{version}

%description profiles-cpu-partitioning
Additional tuned profile(s) optimized for CPU partitioning.

%package profiles-spectrumscale
Summary: Additional tuned profile(s) optimized for IBM Spectrum Scale
Requires: %{name} = %{version}

%description profiles-spectrumscale
Additional tuned profile(s) optimized for IBM Spectrum Scale.

%package profiles-compat
Summary: Additional tuned profiles mainly for backward compatibility with tuned 1.0
Requires: %{name} = %{version}

%description profiles-compat
Additional tuned profiles mainly for backward compatibility with tuned 1.0.
It can be also used to fine tune your system for specific scenarios.

%package profiles-postgresql
Summary: Additional tuned profile(s) targeted to PostgreSQL server loads
Requires: %{name} = %{version}

%description profiles-postgresql
Additional tuned profile(s) targeted to PostgreSQL server loads.

%package profiles-openshift
Summary: Additional TuneD profile(s) optimized for OpenShift
Requires: %{name} = %{version}

%description profiles-openshift
Additional TuneD profile(s) optimized for OpenShift.

%prep
%autosetup -p1 -n %{name}-%{version}%{?prerel2}

%build
# Docs cannot be generated on RHEL now due to missing asciidoctor dependency
# asciidoc doesn't seem to be compatible
%if ! 0%{?rhel}
make html %{make_python_arg}
%endif

%install
make install DESTDIR=%{buildroot} DOCDIR=%{docdir} %{make_python_arg}
%if 0%{?rhel}
sed -i 's/\(dynamic_tuning[ \t]*=[ \t]*\).*/\10/' %{buildroot}%{_sysconfdir}/tuned/tuned-main.conf
%endif
%if ! 0%{?rhel}
# manual
make install-html DESTDIR=%{buildroot} DOCDIR=%{docdir}
%endif

# conditional support for grub2, grub2 is not available on all architectures
# and tuned is noarch package, thus the following hack is needed
mkdir -p %{buildroot}%{_datadir}/tuned/grub2
mv %{buildroot}%{_sysconfdir}/grub.d/00_tuned %{buildroot}%{_datadir}/tuned/grub2/00_tuned
rmdir %{buildroot}%{_sysconfdir}/grub.d

# ghost for persistent storage
mkdir -p %{buildroot}%{_var}/lib/tuned

# ghost for NFV
mkdir -p %{buildroot}%{_sysconfdir}/modprobe.d
touch %{buildroot}%{_sysconfdir}/modprobe.d/kvm.rt.tuned.conf

# validate desktop file
desktop-file-validate %{buildroot}%{_datadir}/applications/tuned-gui.desktop

# On RHEL-7 EPEL is needed, because there is no python-mock package and
# python-2.7 doesn't have mock built-in
%if 0%{?rhel} >= 8 || 0%{?epel} || ! 0%{?rhel}
%check
make test %{make_python_arg}
%endif

%post
%systemd_post tuned.service

# convert active_profile from full path to name (if needed)
sed -i 's|.*/\([^/]\+\)/[^\.]\+\.conf|\1|' /etc/tuned/active_profile

# convert GRUB_CMDLINE_LINUX to GRUB_CMDLINE_LINUX_DEFAULT
if [ -r "%{_sysconfdir}/default/grub" ]; then
  sed -i 's/GRUB_CMDLINE_LINUX="$GRUB_CMDLINE_LINUX \\$tuned_params"/GRUB_CMDLINE_LINUX_DEFAULT="$GRUB_CMDLINE_LINUX_DEFAULT \\$tuned_params"/' \
    %{_sysconfdir}/default/grub
fi

%preun
%systemd_preun tuned.service
if [ "$1" == 0 ]; then
  # clear persistent storage
  rm -f %{_var}/lib/tuned/*
  # clear temporal storage
  rm -f /run/tuned/*
fi

%postun
%systemd_postun_with_restart tuned.service
# conditional support for grub2, grub2 is not available on all architectures
# and tuned is noarch package, thus the following hack is needed
if [ "$1" == 0 ]; then
  rm -f %{_sysconfdir}/grub.d/00_tuned || :
  # unpatch /etc/default/grub
  if [ -r "%{_sysconfdir}/default/grub" ]; then
    sed -i '/GRUB_CMDLINE_LINUX_DEFAULT="${GRUB_CMDLINE_LINUX_DEFAULT:+$GRUB_CMDLINE_LINUX_DEFAULT }\\$tuned_params"/d' %{_sysconfdir}/default/grub
  fi
  # cleanup for Boot loader specification (BLS)
  # clear grubenv variables
  grub2-editenv - unset tuned_params tuned_initrd &>/dev/null || :
  # unpatch BLS entries
  MACHINE_ID=`cat /etc/machine-id 2>/dev/null`
  if [ "$MACHINE_ID" ]
  then
    for f in /boot/loader/entries/$MACHINE_ID-*.conf
    do
      # Skip non-files and rescue entries
      if [ ! -f "$f" -o "${f: -12}" == "-rescue.conf" ]
      then
        continue
      fi
      # Skip boom managed entries
      if [[ "$f" =~ \w*-[0-9a-f]{7,}-.*-.*.conf ]]
      then
        continue
      fi
      sed -i '/^\s*options\s\+.*\$tuned_params/ s/\s\+\$tuned_params\b//g' "$f" &>/dev/null || :
      sed -i '/^\s*initrd\s\+.*\$tuned_initrd/ s/\s\+\$tuned_initrd\b//g' "$f" &>/dev/null || :
    done
  fi
fi

%triggerun -- tuned < 2.0-0
# remove ktune from old tuned, now part of tuned
/usr/sbin/service ktune stop &>/dev/null || :
/usr/sbin/chkconfig --del ktune &>/dev/null || :

%posttrans
# conditional support for grub2, grub2 is not available on all architectures
# and tuned is noarch package, thus the following hack is needed
if [ -d %{_sysconfdir}/grub.d ]; then
  cp -a %{_datadir}/tuned/grub2/00_tuned %{_sysconfdir}/grub.d/00_tuned
  selinuxenabled &>/dev/null && \
    restorecon %{_sysconfdir}/grub.d/00_tuned &>/dev/null || :
fi

%files
%exclude %{docdir}/README.utils
%exclude %{docdir}/README.scomes
%exclude %{docdir}/README.NFV
%doc %{docdir}
%{_datadir}/bash-completion/completions/tuned-adm
%if %{with python3}
%exclude %{python3_sitelib}/tuned/gtk
%{python3_sitelib}/tuned
%else
%exclude %{python2_sitelib}/tuned/gtk
%{python2_sitelib}/tuned
%endif
%{_sbindir}/tuned
%{_sbindir}/tuned-adm
%exclude %{_sysconfdir}/tuned/realtime-variables.conf
%exclude %{_sysconfdir}/tuned/realtime-virtual-guest-variables.conf
%exclude %{_sysconfdir}/tuned/realtime-virtual-host-variables.conf
%exclude %{_sysconfdir}/tuned/cpu-partitioning-variables.conf
%exclude %{_sysconfdir}/tuned/cpu-partitioning-powersave-variables.conf
%exclude %{_prefix}/lib/tuned/default
%exclude %{_prefix}/lib/tuned/desktop-powersave
%exclude %{_prefix}/lib/tuned/laptop-ac-powersave
%exclude %{_prefix}/lib/tuned/server-powersave
%exclude %{_prefix}/lib/tuned/laptop-battery-powersave
%exclude %{_prefix}/lib/tuned/enterprise-storage
%exclude %{_prefix}/lib/tuned/spindown-disk
%exclude %{_prefix}/lib/tuned/sap-netweaver
%exclude %{_prefix}/lib/tuned/sap-hana
%exclude %{_prefix}/lib/tuned/mssql
%exclude %{_prefix}/lib/tuned/oracle
%exclude %{_prefix}/lib/tuned/atomic-host
%exclude %{_prefix}/lib/tuned/atomic-guest
%exclude %{_prefix}/lib/tuned/realtime
%exclude %{_prefix}/lib/tuned/realtime-virtual-guest
%exclude %{_prefix}/lib/tuned/realtime-virtual-host
%exclude %{_prefix}/lib/tuned/cpu-partitioning
%exclude %{_prefix}/lib/tuned/cpu-partitioning-powersave
%exclude %{_prefix}/lib/tuned/spectrumscale-ece
%exclude %{_prefix}/lib/tuned/postgresql
%exclude %{_prefix}/lib/tuned/openshift
%exclude %{_prefix}/lib/tuned/openshift-control-plane
%exclude %{_prefix}/lib/tuned/openshift-node
%{_prefix}/lib/tuned
%dir %{_sysconfdir}/tuned
%dir %{_sysconfdir}/tuned/recommend.d
%dir %{_libexecdir}/tuned
%{_libexecdir}/tuned/defirqaffinity*
%config(noreplace) %verify(not size mtime md5) %{_sysconfdir}/tuned/active_profile
%config(noreplace) %verify(not size mtime md5) %{_sysconfdir}/tuned/profile_mode
%config(noreplace) %verify(not size mtime md5) %{_sysconfdir}/tuned/post_loaded_profile
%config(noreplace) %{_sysconfdir}/tuned/tuned-main.conf
%config(noreplace) %verify(not size mtime md5) %{_sysconfdir}/tuned/bootcmdline
%{_sysconfdir}/dbus-1/system.d/com.redhat.tuned.conf
%verify(not size mtime md5) %{_sysconfdir}/modprobe.d/tuned.conf
%{_tmpfilesdir}/tuned.conf
%{_unitdir}/tuned.service
%dir %{_localstatedir}/log/tuned
%dir /run/tuned
%dir %{_var}/lib/tuned
%{_mandir}/man5/tuned*
%{_mandir}/man7/tuned-profiles.7*
%{_mandir}/man8/tuned*
%dir %{_datadir}/tuned
%{_datadir}/tuned/grub2
%{_datadir}/polkit-1/actions/com.redhat.tuned.policy
%ghost %{_sysconfdir}/modprobe.d/kvm.rt.tuned.conf
%{_prefix}/lib/kernel/install.d/92-tuned.install

%files gtk
%{_sbindir}/tuned-gui
%if %{with python3}
%{python3_sitelib}/tuned/gtk
%else
%{python2_sitelib}/tuned/gtk
%endif
%{_datadir}/tuned/ui
%{_datadir}/icons/hicolor/scalable/apps/tuned.svg
%{_datadir}/applications/tuned-gui.desktop

%files utils
%doc COPYING
%{_bindir}/powertop2tuned
%{_libexecdir}/tuned/pmqos-static*

%files utils-systemtap
%doc doc/README.utils
%doc doc/README.scomes
%doc COPYING
%{_sbindir}/varnetload
%{_sbindir}/netdevstat
%{_sbindir}/diskdevstat
%{_sbindir}/scomes
%{_mandir}/man8/varnetload.*
%{_mandir}/man8/netdevstat.*
%{_mandir}/man8/diskdevstat.*
%{_mandir}/man8/scomes.*

%files profiles-sap
%{_prefix}/lib/tuned/sap-netweaver
%{_mandir}/man7/tuned-profiles-sap.7*

%files profiles-sap-hana
%{_prefix}/lib/tuned/sap-hana
%{_mandir}/man7/tuned-profiles-sap-hana.7*

%files profiles-mssql
%{_prefix}/lib/tuned/mssql
%{_mandir}/man7/tuned-profiles-mssql.7*

%files profiles-oracle
%{_prefix}/lib/tuned/oracle
%{_mandir}/man7/tuned-profiles-oracle.7*

%files profiles-atomic
%{_prefix}/lib/tuned/atomic-host
%{_prefix}/lib/tuned/atomic-guest
%{_mandir}/man7/tuned-profiles-atomic.7*

%files profiles-realtime
%config(noreplace) %{_sysconfdir}/tuned/realtime-variables.conf
%{_prefix}/lib/tuned/realtime
%{_mandir}/man7/tuned-profiles-realtime.7*

%files profiles-nfv-guest
%config(noreplace) %{_sysconfdir}/tuned/realtime-virtual-guest-variables.conf
%{_prefix}/lib/tuned/realtime-virtual-guest
%{_mandir}/man7/tuned-profiles-nfv-guest.7*

%files profiles-nfv-host
%config(noreplace) %{_sysconfdir}/tuned/realtime-virtual-host-variables.conf
%{_prefix}/lib/tuned/realtime-virtual-host
%{_mandir}/man7/tuned-profiles-nfv-host.7*

%files profiles-nfv
%doc %{docdir}/README.NFV

%files profiles-cpu-partitioning
%config(noreplace) %{_sysconfdir}/tuned/cpu-partitioning-variables.conf
%config(noreplace) %{_sysconfdir}/tuned/cpu-partitioning-powersave-variables.conf
%{_prefix}/lib/tuned/cpu-partitioning
%{_prefix}/lib/tuned/cpu-partitioning-powersave
%{_mandir}/man7/tuned-profiles-cpu-partitioning.7*
%{_mandir}/man7/tuned-profiles-cpu-partitioning-powersave.7*

%files profiles-spectrumscale
%{_prefix}/lib/tuned/spectrumscale-ece
%{_mandir}/man7/tuned-profiles-spectrumscale-ece.7*

%files profiles-compat
%{_prefix}/lib/tuned/default
%{_prefix}/lib/tuned/desktop-powersave
%{_prefix}/lib/tuned/laptop-ac-powersave
%{_prefix}/lib/tuned/server-powersave
%{_prefix}/lib/tuned/laptop-battery-powersave
%{_prefix}/lib/tuned/enterprise-storage
%{_prefix}/lib/tuned/spindown-disk
%{_mandir}/man7/tuned-profiles-compat.7*

%files profiles-postgresql
%{_prefix}/lib/tuned/postgresql
%{_mandir}/man7/tuned-profiles-postgresql.7*

%files profiles-openshift
%{_prefix}/lib/tuned/openshift
%{_prefix}/lib/tuned/openshift-control-plane
%{_prefix}/lib/tuned/openshift-node
%{_mandir}/man7/tuned-profiles-openshift.7*

%changelog
* Fri Aug 19 2022 Jaroslav Škarvada <jskarvad@redhat.com> - 2.19.0-1
- new release
- rebased tuned to latest upstream
  related: rhbz#2057609

* Tue Aug 9 2022 Jaroslav Škarvada <jskarvad@redhat.com> - 2.19.0-0.1.rc1
- new release
- rebased tuned to latest upstream
  resolves: rhbz#2057609
- fixed parsing of inline comments
  resolves: rhbz#2060138
- added support for quotes in isolated_cores specification
  resolves: rhbz#1891036
- spec: reduced weak dependencies
  resolves: rhbz#2093841
- recommend: do not ignore syspurpose_role if there is no syspurpose
  resolves: rhbz#2030580
- added support for initial autosetup of isolated_cores
  resolves: rhbz#2093847

* Wed Feb 9 2022 Jaroslav Škarvada <jskarvad@redhat.com> - 2.18.0-1
- new release
- rebased tuned to latest upstream
  related: rhbz#2003833
- tuned-gui: fixed creation of new profile

* Wed Feb 2 2022 Jaroslav Škarvada <jskarvad@redhat.com> - 2.18.0-0.1.rc1
- new release
- rebased tuned to latest upstream
  resolves: rhbz#2003833
- profiles: fix improper parsing of include directive
  resolves: rhbz#2017924
- disk: added support for the nvme
  resolves: rhbz#1854816
- cpu: extended cstate force_latency syntax to allow skipping zero latency
  resolves: rhbz#2002744
- net: added support for the txqueuelen
  resolves: rhbz#2015044
- bootloader: on s390(x) remove TuneD variables from the BLS
  resolves: rhbz#1978786
- daemon: don't do full rollback on systemd failure
  resolves: rhbz#2011459
- spec: do not require subscription-manager on CentOS
  resolves: rhbz#2028865

* Sun Jan 16 2022 Jaroslav Škarvada <jskarvad@redhat.com> - 2.17.0-1
- new release
- rebased tuned to latest upstream
  related: rhbz#2003838

* Sun Jan 2 2022 Jaroslav Škarvada <jskarvad@redhat.com> - 2.17.0-0.1.rc1
- new release
- rebased tuned to latest upstream
  resolves: rhbz#2003838
- cpu-partitioning: fixed no_balance_cores on newer kernels
  resolves: rhbz#1874596
- scheduler: allow exclude of processes from the specific cgroup(s)
  resolves: rhbz#1980715
- switched to the configparser from the configobj
  resolves: rhbz#1936386
- spec: do not require subscription-manager on CentOS
  resolves: rhbz#2029405

* Wed Jul 21 2021 Jaroslav Škarvada <jskarvad@redhat.com> - 2.16.0-1
- new release
- rebased tuned to latest upstream
  related: rhbz#1936426

* Wed Jul 7 2021 Jaroslav Škarvada <jskarvad@redhat.com> - 2.16.0-0.1.rc1
- new release
- rebased tuned to latest upstream
  resolves: rhbz#1936426
- realtime: "isolate_managed_irq=Y" should be mentioned in "/etc/tuned/realtime-virtual-*-variables.conf"
  resolves: rhbz#1817827
- realtime: changed tuned default to "isolcpus=domain,managed_irq,X-Y"
  resolves: rhbz#1820626
- applying a profile with multiple inheritance where parents include a common ancestor fails
  resolves: rhbz#1825882
- failure in moving i40e IRQ threads to housekeeping CPUs from isolated CPUs
  resolves: rhbz#1933069
- sort network devices before matching by regex
  resolves: rhbz#1939970
- net: fixed traceback while adjusting the netdev queue count
  resolves: rhbz#1943291
- net: fixed traceback if the first listed device returns netlink error
  resolves: rhbz#1944686
- realtime: improve verification
  resolves: rhbz#1947858
- bootloader: add support for the rpm-ostree
  resolves: rhbz#1950164
- net: fixed traceback if a device channel contains n/a
  resolves: rhbz#1974071
- mssql: updated the profile
  resolves: rhbz#1942733
- realtime: disabled kvm.nx_huge_page kernel module option in realtime-virtual-host profile
  resolves: rhbz#1976825
- realtime: explicitly set 'irqaffinity=~<isolated_cpu_mask>' in kernel command line
  resolves: rhbz#1974820
- scheduler: added abstraction for the sched_* and numa_* variables which were previously accessible through the sysctl
  resolves: rhbz#1952687
- recommend: fixed wrong profile on ppc64le bare metal servers
  resolves: rhbz#1959889

* Thu Dec 17 2020 Jaroslav Škarvada <jskarvad@redhat.com> - 2.15.0-1
- new release
- rebased tuned to latest upstream
  related: rhbz#1874052

* Tue Dec 1 2020 Jaroslav Škarvada <jskarvad@redhat.com> - 2.15.0-0.1.rc1
- new release
- rebased tuned to latest upstream
  resolves: rhbz#1874052
- added plugin service for linux services control
  resolves: rhbz#1869991
- scheduler: added default_irq_smp_affinity option
  resolves: rhbz#1896348
- bootloader: skip boom managed BLS snippets
  resolves: rhbz#1901532
- scheduler: added perf_process_fork option to enable processing of fork
  resolves: rhbz#1894610
- scheduler: added perf_mmap_pages option to set perf buffer size
  resolves: rhbz#1890219
- bootloader: fixed cmdline duplication with BLS and grub2-mkconfig
  resolves: rhbz#1777874

* Mon Jun 15 2020 Jaroslav Škarvada <jskarvad@redhat.com> - 2.14.0-1
- new release
- rebased tuned to latest upstream
  related: rhbz#1792264

* Mon Jun 8 2020 Jaroslav Škarvada <jskarvad@redhat.com> - 2.14.0-0.1.rc1
- new release
- rebased tuned to latest upstream
  resolves: rhbz#1792264
- oracle: turned off NUMA balancing
  resolves: rhbz#1782233
- man: documented the possibility to apply multiple profiles
  resolves: rhbz#1794337
- cpu-partitioning: disabled kernel.timer_migration
  resolves: rhbz#1797629
- profiles: new profile optimize-serial-console
  resolves: rhbz#1840689
- added support for a post-loaded profile
  resolves: rhbz#1798183
- plugins: new irqbalance plugin
  resolves: rhbz#1784645
- throughput-performance: added architecture specific tuning for Marvell ThunderX
  resolves: rhbz#1746961
- throughput-performance: added architecture specific tuning for AMD
  resolves: rhbz#1746957
- scheduler: added support for cgroups
  resolves: rhbz#1784648

* Wed Dec 11 2019 Jaroslav Škarvada <jskarvad@redhat.com> - 2.13.0-1
- new release
- rebased tuned to latest upstream
  related: rhbz#1738250
- sap-hana: updated tuning
  resolves: rhbz#1779821
- latency-performance: updated tuning
  resolves: rhbz#1779759
- added sst profile
  resolves: rhbz#1743879

* Sun Dec 1 2019 Jaroslav Škarvada <jskarvad@redhat.com> - 2.13.0-0.1.rc1
- new release
- rebased tuned to latest upstream
  resolves: rhbz#1738250
- cpu: fixed checking if EPB is supported
  resolves: rhbz#1690929
- scheduler: fixed IRQ SMP affinity verification to respect ignore_missing
  resolves: rhbz#1729936
- realtime: enabled ktimer_lockless_check
  resolves: rhbz#1734096
- plugins: support cpuinfo_regex and uname_regex matching
  resolves: rhbz#1748965
- sysctl: made reapply_sysctl ignore configs from /usr
  resolves: rhbz#1759597
- added support for multiple include directives
  resolves: rhbz#1760390
- realtime: added nowatchdog kernel command line option
  resolves: rhbz#1767614

* Thu Jun 27 2019 Jaroslav Škarvada <jskarvad@redhat.com> - 2.12.0-1
- new release
- rebased tuned to latest upstream
  related: rhbz#1685585

* Wed Jun 12 2019 Jaroslav Škarvada <jskarvad@redhat.com> - 2.12.0-0.1.rc1
- new release
- rebased tuned to latest upstream
  resolves: rhbz#1685585
- sap-netweaver: changed values of kernel.shmall and kernel.shmmax to RHEL-8 defaults
  resolves: rhbz#1708418
- sap-netweaver: changed value of kernel.sem to RHEL-8 default
  resolves: rhbz#1701394
- sap-hana-vmware: dropped profile
  resolves: rhbz#1715541
- s2kb function: fixed to be compatible with python3
  resolves: rhbz#1684122
- do fallback to the powersave governor (balanced and powersave profiles)
  resolves: rhbz#1679205
- added support for negation of CPU list
  resolves: rhbz#1676588
- switched from sysctl tool to own implementation
  resolves: rhbz#1666678
- realtime-virtual-host: added tsc-deadline=on to qemu cmdline
  resolves: rhbz#1554458
- fixed handling of devices that have been removed and re-attached
  resolves: rhbz#1677730

* Thu Mar 21 2019 Jaroslav Škarvada <jskarvad@redhat.com> - 2.11.0-1
- new release
- rebased tuned to latest upstream
  related: rhbz#1643654
- used dmidecode only on x86 architectures
  resolves: rhbz#1688371
- recommend: fixed to work without tuned daemon running
  resolves: rhbz#1687397

* Sun Mar 10 2019 Jaroslav Škarvada <jskarvad@redhat.com> - 2.11.0-0.1.rc1
- new release
- rebased tuned to latest upstream
  resolves: rhbz#1643654
- use online CPUs for cpusets calculations instead of present CPUs
  resolves: rhbz#1613478
- realtime-virtual-guest: run script.sh
  related: rhbz#1616043
- make python-dmidecode a weak dependency
  resolves: rhbz#1565598
- make virtual-host identical to latency-performance
  resolves: rhbz#1588932
- added support for Boot loader specification (BLS)
  resolves: rhbz#1576435
- scheduler: keep polling file objects alive long enough
  resolves: rhbz#1659140
- mssql: updated tuning
  resolves: rhbz#1660178
- s2kb: fixed to be compatible with python3
  resolves: rhbz#1684122
- profiles: fallback to the 'powersave' scaling governor
  resolves: rhbz#1679205
- disable KSM only once, re-enable it only on full rollback
  resolves: rhbz#1622239
- functions: reworked setup_kvm_mod_low_latency to count with kernel changes
  resolves: rhbz#1649408
- updated virtual-host profile
  resolves: rhbz#1569375
- added log message for unsupported parameters in plugin_net
  resolves: rhbz#1533852
- added range feature for cpu exclusion
  resolves: rhbz#1533908
- make a copy of devices when verifying tuning
  resolves: rhbz#1592743
- fixed disk plugin/plugout problem
  resolves: rhbz#1595156
- fixed unit configuration reading
  resolves: rhbz#1613379
- reload profile configuration on SIGHUP
  resolves: rhbz#1631744
- use built-in functionality to apply system sysctl
  resolves: rhbz#1663412

* Wed Jul 4 2018 Jaroslav Škarvada <jskarvad@redhat.com> - 2.10.0-1
- new release
- rebased tuned to latest upstream
  related: rhbz#1546598
- IRQ affinity handled by scheduler plugin
  resolves: rhbz#1590937

* Mon Jun 11 2018 Jaroslav Škarvada <jskarvad@redhat.com> - 2.10.0-0.1.rc1
- new release
- rebased tuned to latest upstream
  resolves: rhbz#1546598
- script: show stderr output in the log
  resolves: rhbz#1536476
- realtime-virtual-host: script.sh: add error checking
  resolves: rhbz#1461509
- man: improved tuned-profiles-cpu-partitioning.7
  resolves: rhbz#1548148
- bootloader: check if grub2_cfg_file_name is None in _remove_grub2_tuning()
  resolves: rhbz#1571403
- plugin_scheduler: whitelist/blacklist processed also for thread names
  resolves: rhbz#1512295
- bootloader: patch all GRUB2 config files
  resolves: rhbz#1556990
- profiles: added mssql profile
  resolves: rhbz#1442122
- tuned-adm: print log excerpt when changing profile
  resolves: rhbz#1538745
- cpu-partitioning: use no_balance_cores instead of no_rebalance_cores
  resolves: rhbz#1550573
- sysctl: support assignment modifiers as other plugins do
  resolves: rhbz#1564092
- oracle: fixed ip_local_port_range parity warning
  resolves: rhbz#1527219
- Fix verifying cpumask on systems with more than 32 cores
  resolves: rhbz#1528368
- oracle: updated the profile to be in sync with KCS 39188
  resolves: rhbz#1447323

* Sun Oct 29 2017 Jaroslav Škarvada <jskarvad@redhat.com> - 2.9.0-1
- new release
- rebased tuned to latest upstream
  related: rhbz#1467576

* Fri Oct 20 2017 Jaroslav Škarvada <jskarvad@redhat.com> - 2.9.0-0.2.rc2
- new release
- rebased tuned to latest upstream
  related: rhbz#1467576
- fixed expansion of the variables in the 'devices' section
  related: rhbz#1490399
- cpu-partitioning: add no_rebalance_cores= option
  resolves: rhbz#1497182

* Thu Oct 12 2017 Jaroslav Škarvada <jskarvad@redhat.com> - 2.9.0-0.1.rc1
- new release
- rebased tuned to latest upstream
  resolves: rhbz#1467576
- added recommend.d functionality
  resolves: rhbz#1459146
- recommend: added support for matching of processes
  resolves: rhbz#1461838
- plugin_video: added support for the 'dpm' power method
  resolves: rhbz#1417659
- list available profiles on 'tuned-adm profile'
  resolves: rhbz#988433
- cpu-partitioning: used tuned instead of tuna for cores isolation
  resolves: rhbz#1442229
- inventory: added workaround for pyudev < 0.18
  resolves: rhbz#1251240
- realtime: used skew_tick=1 in kernel cmdline
  resolves: rhbz#1447938
- realtime-virtual-guest: re-assigned kernel thread priorities
  resolves: rhbz#1452357
- bootloader: splitted string for removal from cmdline
  resolves: rhbz#1461279
- network-latency: added skew_tick=1 kernel command line parameter
  resolves: rhbz#1451073
- bootloader: accepted only certain values for initrd_remove_dir
  resolves: rhbz#1455161
- increased udev monitor buffer size, made it configurable
  resolves: rhbz#1442306
- bootloader: don't add nonexistent overlay image to grub.cfg
  resolves: rhbz#1454340
- plugin_cpu: don't log error in execute() if EPB is not supported
  resolves: rhbz#1443182
- sap-hana: fixed description of the sap-hana profiles
  resolves: rhbz#1482005
- plugin_systemd: on full_rollback notify about need of initrd regeneration
  resolves: rhbz#1469258
- don't log errors about missing files on verify with ignore_missing set
  resolves: rhbz#1451435
- plugin_scheduler: improved logging
  resolves: rhbz#1474961
- improved checking if we are rebooting or not
  resolves: rhbz#1475571
- started dbus exports after a profile is applied
  resolves: rhbz#1443142
- sap-hana: changed force_latency to 70
  resolves: rhbz#1501252

* Fri Apr 7 2017 Jaroslav Škarvada <jskarvad@redhat.com> - 2.8.0-1
- new release
- rebase tuned to latest upstream
  resolves: rhbz#1388454
- cpu-partitioning: enabled timer migration
  resolves: rhbz#1408308
- cpu-partitioning: disabled kvmclock sync and ple
  resolves: rhbz#1395855
- spec: muted error if there is no selinux support
  resolves: rhbz#1404214
- units: implemented instance priority
  resolves: rhbz#1246172
- bootloader: added support for initrd overlays
  resolves: rhbz#1414098
- cpu-partitioning: set CPUAffinity early in initrd image
  resolves: rhbz#1394965
- cpu-partitioning: set workqueue affinity early
  resolves: rhbz#1395899
- scsi_host: fixed probing of ALPM, missing ALPM logged as info
  resolves: rhbz#1416712
- added new profile cpu-partitioning
  resolves: rhbz#1359956
- bootloader: improved inheritance
  resolves: rhbz#1274464
- units: implemented udev-based regexp device matching
  resolves: rhbz#1251240
- units: introduced pre_script, post_script
  resolves: rhbz#1246176
- realtime-virtual-host: accommodate new ktimersoftd thread
  resolves: rhbz#1332563
- defirqaffinity: fixed traceback due to syntax error
  resolves: rhbz#1369791
- variables: support inheritance of variables
  resolves: rhbz#1433496
- scheduler: added support for cores isolation
  resolves: rhbz#1403309
- tuned-profiles-nfv splitted to host/guest and dropped unneeded dependency
  resolves: rhbz#1413111
- desktop: fixed typo in profile summary
  resolves: rhbz#1421238
- with systemd don't do full rollback on shutdown / reboot
  resolves: rhbz#1421286
- builtin functions: added virt_check function and support to include
  resolves: rhbz#1426654
- cpulist_present: explicitly sorted present CPUs
  resolves: rhbz#1432240
- plugin_scheduler: fixed initialization
  resolves: rhbz#1433496
- log errors when applying a profile fails
  resolves: rhbz#1434360
- systemd: added support for older systemd CPUAffinity syntax
  resolves: rhbz#1441791
- scheduler: added workarounds for low level exceptions from python-linux-procfs
  resolves: rhbz#1441792
- bootloader: workaround for adding tuned_initrd to new kernels on restart
  resolves: rhbz#1441797

* Tue Aug 2 2016 Jaroslav Škarvada <jskarvad@redhat.com> - 2.7.1-1
- new-release
- gui: fixed traceback caused by DBus paths copy&paste error
  related: rhbz#1356369
- tuned-adm: fixed traceback of 'tuned-adm list' if daemon is not running
  resolves: rhbz#1358857
- tuned-adm: fixed profile_info traceback
  resolves: rhbz#1362481

* Tue Jul 19 2016 Jaroslav Škarvada <jskarvad@redhat.com> - 2.7.0-1
- new-release
- gui: fixed save profile
  resolves: rhbz#1242491
- tuned-adm: added --ignore-missing parameter
  resolves: rhbz#1243807
- plugin_vm: added transparent_hugepage alias
  resolves: rhbz#1249610
- plugins: added modules plugin
  resolves: rhbz#1249618
- plugin_cpu: do not show error if cpupower or x86_energy_perf_policy are missing
  resolves: rhbz#1254417
- tuned-adm: fixed restart attempt if tuned is not running
  resolves: rhbz#1258755
- nfv: avoided race condition by using synchronous mode
  resolves: rhbz#1259039
- realtime: added check for isolcpus sanity
  resolves: rhbz#1264128
- pm_qos: fixed exception if PM_QoS is not available
  resolves: rhbz#1296137
- plugin_sysctl: reapply system sysctl after Tuned sysctl are applied
  resolves: rhbz#1302953
- atomic: increase number of inotify watches
  resolves: rhbz#1322001
- realtime-virtual-host/guest: added rcu_nocbs kernel boot parameter
  resolves: rhbz#1334479
- realtime: fixed kernel.sched_rt_runtime_us to be -1
  resolves: rhbz#1346715
- tuned-adm: fixed detection of no_daemon mode
  resolves: rhbz#1351536
- plugin_base: correctly strip assignment modifiers even if not used
  resolves: rhbz#1353142
- plugin_disk: try to workaround embedded '/' in device names
  related: rhbz#1353142
- sap-hana: explicitly setting kernel.numa_balancing = 0 for better performance
  resolves: rhbz#1355768
- switched to polkit authorization
  resolves: rhbz#1095142
- plugins: added scsi_host plugin
  resolves: rhbz#1246992
- spec: fixed conditional support for grub2 to work with selinux
  resolves: rhbz#1351937
- gui: added tuned icon and desktop file
  resolves: rhbz#1356369

* Tue Jan 5
2016 Jaroslav Škarvada <jskarvad@redhat.com> - 2.6.0-1 - new-release - plugin_cpu: do not show error if cpupower or x86_energy_perf_policy are missing - plugin_sysctl: fixed quoting of sysctl values resolves: rhbz#1254538 - tuned-adm: added log file location hint to verify command output - libexec: fixed listdir and isdir in defirqaffinity.py resolves: rhbz#1252160 - plugin_cpu: save and restore only intel pstate attributes that were changed resolves: rhbz#1252156 - functions: fixed sysfs save to work with options resolves: rhbz#1251507 - plugins: added scsi_host plugin - tuned-adm: fixed restart attempt if tuned is not running - spec: fixed post scriptlet to work without grub resolves: rhbz#1265654 - tuned-profiles-nfv: fix find-lapictscdeadline-optimal.sh for CPUS where ns > 6500 resolves: rhbz#1267284 - functions: fixed restore_logs_syncing to preserve SELinux context on rsyslog.conf resolves: rhbz#1268901 - realtime: set unboud workqueues cpumask resolves: rhbz#1259043 - spec: correctly remove tuned footprint from /etc/default/grub resolves: rhbz#1268845 - gui: fixed creation of new profile resolves: rhbz#1274609 - profiles: removed nohz_full from the realtime profile resolves: rhbz#1274486 - profiles: Added nohz_full and nohz=on to realtime guest/host profiles resolves: rhbz#1274445 - profiles: fixed lapic_timer_adv_ns cache resolves: rhbz#1259452 - plugin_sysctl: pass verification even if the option doesn't exist related: rhbz#1252153 - added support for 'summary' and 'description' of profiles, extended D-Bus API for Cockpit related: rhbz#1228356 * Tue Aug 4 2015 Jaroslav Škarvada <jskarvad@redhat.com> - 2.5.1-1 - new-release related: rhbz#1155052 - plugin_scheduler: work with nohz_full resolves: rhbz#1247184 - fixed realtime-virtual-guest/host profiles packaged twice resolves: rhbz#1249028 - fixed requirements of realtime and nfv profiles - fixed tuned-gui not starting - various other minor fixes * Sun Jul 5 2015 Jaroslav Škarvada <jskarvad@redhat.com> - 
2.5.0-1 - new-release resolves: rhbz#1155052 - add support for ethtool -C to tuned network plugin resolves: rhbz#1152539 - add support for ethtool -K to tuned network plugin resolves: rhbz#1152541 - add support for calculation of values for the kernel command line resolves: rhbz#1191595 - no error output if there is no hdparm installed resolves: rhbz#1191775 - do not run hdparm on hotplug events if there is no hdparm tuning resolves: rhbz#1193682 - add oracle tuned profile resolves: rhbz#1196298 - fix bash completions for tuned-adm resolves: rhbz#1207668 - add glob support to tuned sysfs plugin resolves: rhbz#1212831 - add tuned-adm verify subcommand resolves: rhbz#1212836 - do not install tuned kernel command line to rescue kernels resolves: rhbz#1223864 - add variables support resolves: rhbz#1225124 - add built-in support for unit conversion into tuned resolves: rhbz#1225135 - fix vm.max_map_count setting in sap-netweaver profile resolves: rhbz#1228562 - add tuned profile for RHEL-RT resolves: rhbz#1228801 - plugin_scheduler: added support for runtime tuning of processes resolves: rhbz#1148546 - add support for changing elevators on xvd* devices (Amazon EC2) resolves: rhbz#1170152 - add workaround to be run after systemd-sysctl resolves: rhbz#1189263 - do not change settings of transparent hugepages if set in kernel cmdline resolves: rhbz#1189868 - add tuned profiles for RHEL-NFV resolves: rhbz#1228803 - plugin_bootloader: apply $tuned_params to existing kernels resolves: rhbz#1233004 * Thu Oct 16 2014 Jaroslav Škarvada <jskarvad@redhat.com> - 2.4.1-1 - new-release - fixed return code of tuned grub template resolves: rhbz#1151768 - plugin_bootloader: fix for multiple parameters on command line related: rhbz#1148711 - tuned-adm: fixed traceback on "tuned-adm list" resolves: rhbz#1149162 - plugin_bootloader is automatically disabled if grub2 is not found resolves: rhbz#1150047 - plugin_disk: set_spindown and set_APM made independent resolves: rhbz#976725 * Wed Oct 
1 2014 Jaroslav Škarvada <jskarvad@redhat.com> - 2.4.0-1 - new-release resolves: rhbz#1093883 - fixed traceback if profile cannot be loaded related: rhbz#953128 - powertop2tuned: fixed traceback if rewriting file instead of dir resolves: rhbz#963441 - daemon: fixed race condition in start/stop - improved timings, it can be fine tuned in /etc/tuned/tuned-main.conf resolves: rhbz#1028122 - throughput-performance: altered dirty ratios for better performance resolves: rhbz#1043533 - latency-performance: leaving THP on its default resolves: rhbz#1064510 - used throughput-performance profile on server by default resolves: rhbz#1063481 - network-latency: added new profile resolves: rhbz#1052418 - network-throughput: added new profile resolves: rhbz#1052421 - recommend.conf: fixed config file resolves: rhbz#1069123 - spec: added kernel-tools requirement resolves: rhbz#1073008 - systemd: added cpupower.service conflict resolves: rhbz#1073392 - balanced: used medium_power ALPM policy - added support for >, < assignment modifiers in tuned.conf - handled root block devices - balanced: used conservative CPU governor resolves: rhbz#1124125 - plugins: added selinux plugin - plugin_net: added nf_conntrack_hashsize parameter - profiles: added atomic-host profile resolves: rhbz#1091977 - profiles: added atomic-guest profile resolves: rhbz#1091979 - moved profile autodetection from post install script to tuned daemon resolves: rhbz#1144067 - profiles: included sap-hana and sap-hana-vmware profiles - man: structured profiles manual pages according to sub-packages - added missing hdparm dependency resolves: rhbz#1144858 - improved error handling of switch_profile resolves: rhbz#1068699 - tuned-adm: active: detect whether tuned deamon is running related: rhbz#1068699 - removed active_profile from RPM verification resolves: rhbz#1104126 - plugin_disk: readahead value can be now specified in sectors resolves: rhbz#1127127 - plugins: added bootloader plugin resolves: rhbz#1044111 - 
plugin_disk: added error counter to hdparm calls - plugins: added scheduler plugin resolves: rhbz#1100826 - added tuned-gui * Wed Nov 6 2013 Jaroslav Škarvada <jskarvad@redhat.com> - 2.3.0-1 - new-release resolves: rhbz#1020743 - audio plugin: fixed audio settings in standard profiles resolves: rhbz#1019805 - video plugin: fixed tunings - daemon: fixed crash if preset profile is not available resolves: rhbz#953128 - man: various updates and corrections - functions: fixed usb and bluetooth handling - tuned: switched to lightweighted pygobject3-base - daemon: added global config for dynamic_tuning resolves: rhbz#1006427 - utils: added pmqos-static script for debug purposes resolves: rhbz#1015676 - throughput-performance: various fixes resolves: rhbz#987570 - tuned: added global option update_interval - plugin_cpu: added support for x86_energy_perf_policy resolves: rhbz#1015675 - dbus: fixed KeyboardInterrupt handling - plugin_cpu: added support for intel_pstate resolves: rhbz#996722 - profiles: various fixes resolves: rhbz#922068 - profiles: added desktop profile resolves: rhbz#996723 - tuned-adm: implemented non DBus fallback control - profiles: added sap profile - tuned: lowered CPU usage due to python bug resolves: rhbz#917587 * Tue Mar 19 2013 Jaroslav Škarvada <jskarvad@redhat.com> - 2.2.2-1 - new-release: - cpu plugin: fixed cpupower workaround - cpu plugin: fixed crash if cpupower is installed * Fri Mar 1 2013 Jaroslav Škarvada <jskarvad@redhat.com> - 2.2.1-1 - new release: - audio plugin: fixed error handling in _get_timeout - removed cpupower dependency, added sysfs fallback - powertop2tuned: fixed parser crash on binary garbage resolves: rhbz#914933 - cpu plugin: dropped multicore_powersave as kernel upstream already did - plugins: options manipulated by dynamic tuning are now correctly saved and restored - powertop2tuned: added alias -e for --enable option - powertop2tuned: new option -m, --merge-profile to select profile to merge - prefer 
transparent_hugepage over redhat_transparent_hugepage - recommend: use recommend.conf not autodetect.conf - tuned.service: switched to dbus type service resolves: rhbz#911445 - tuned: new option --pid, -P to write PID file - tuned, tuned-adm: added new option --version, -v to show version - disk plugin: use APM value 254 for cleanup / APM disable instead of 255 resolves: rhbz#905195 - tuned: new option --log, -l to select log file - powertop2tuned: avoid circular deps in include (one level check only) - powertop2tuned: do not crash if powertop is not installed - net plugin: added support for wake_on_lan static tuning resolves: rhbz#885504 - loader: fixed error handling - spec: used systemd-rpm macros resolves: rhbz#850347 * Mon Jan 28 2013 Jan Vcelak <jvcelak@redhat.com> 2.2.0-1 - new release: - remove nobarrier from virtual-guest (data loss prevention) - devices enumeration via udev, instead of manual retrieval - support for dynamically inserted devices (currently disk plugin) - dropped rfkill plugins (bluetooth and wifi), the code didn't work * Wed Jan 2 2013 Jaroslav Škarvada <jskarvad@redhat.com> - 2.1.2-1 - new release: - systemtap {disk,net}devstat: fix typo in usage - switched to configobj parser - latency-performance: disabled THP - fixed fd leaks on subprocesses * Thu Dec 06 2012 Jan Vcelak <jvcelak@redhat.com> 2.1.1-1 - fix: powertop2tuned execution - fix: ownership of /etc/tuned * Mon Dec 03 2012 Jan Vcelak <jvcelak@redhat.com> 2.1.0-1 - new release: - daemon: allow running without selected profile - daemon: fix profile merging, allow only safe characters in profile names - daemon: implement missing methods in DBus interface - daemon: implement profile recommendation - daemon: improve daemonization, PID file handling - daemon: improved device matching in profiles, negation possible - daemon: various internal improvements - executables: check for EUID instead of UID - executables: run python with -Es to increase security - plugins: cpu - fix cpupower 
execution - plugins: disk - fix option setting - plugins: mounts - new, currently supports only barriers control - plugins: sysctl - fix a bug preventing settings application - powertop2tuned: speedup, fix crashes with non-C locales - powertop2tuned: support for powertop 2.2 output - profiles: progress on replacing scripts with plugins - tuned-adm: bash completion - suggest profiles from all supported locations - tuned-adm: complete switch to D-bus - tuned-adm: full control to users with physical access * Mon Oct 08 2012 Jaroslav Škarvada <jskarvad@redhat.com> - 2.0.2-1 - New version - Systemtap scripts moved to utils-systemtap subpackage * Sun Jul 22 2012 Fedora Release Engineering <rel-eng@lists.fedoraproject.org> - 2.0.1-4 - Rebuilt for https://fedoraproject.org/wiki/Fedora_18_Mass_Rebuild * Tue Jun 12 2012 Jaroslav Škarvada <jskarvad@redhat.com> - 2.0.1-3 - another powertop-2.0 compatibility fix Resolves: rhbz#830415 * Tue Jun 12 2012 Jan Kaluza <jkaluza@redhat.com> - 2.0.1-2 - fixed powertop2tuned compatibility with powertop-2.0 * Tue Apr 03 2012 Jaroslav Škarvada <jskarvad@redhat.com> - 2.0.1-1 - new version * Fri Mar 30 2012 Jan Vcelak <jvcelak@redhat.com> 2.0-1 - first stable release 07070100000115000081A40000000000000000000000016391BC3A00000038000000000000000000000000000000000000002B00000000tuned-2.19.0.29+git.b894a3e/tuned.tmpfiles# tuned runtime directory d /run/tuned 0755 root root - 07070100000116000081A40000000000000000000000016391BC3A000003B9000000000000000000000000000000000000002E00000000tuned-2.19.0.29+git.b894a3e/tuned/__init__.py# # tuned: daemon for monitoring and adaptive tuning of system devices # # Copyright (C) 2008-2013 Red Hat, Inc. # # This program is free software; you can redistribute it and/or # modify it under the terms of the GNU General Public License # as published by the Free Software Foundation; either version 2 # of the License, or (at your option) any later version. 
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program; if not, write to the Free Software
# Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
#

__copyright__ = "Copyright 2008-2013 Red Hat, Inc."
__license__ = "GPLv2+"
__email__ = "power-management@lists.fedoraproject.org"
07070100000117000041ED0000000000000000000000026391BC3A00000000000000000000000000000000000000000000002800000000tuned-2.19.0.29+git.b894a3e/tuned/admin
07070100000118000081A40000000000000000000000016391BC3A0000004E000000000000000000000000000000000000003400000000tuned-2.19.0.29+git.b894a3e/tuned/admin/__init__.py
from .admin import *
from .exceptions import *
from .dbus_controller import *
07070100000119000081A40000000000000000000000016391BC3A000039B2000000000000000000000000000000000000003100000000tuned-2.19.0.29+git.b894a3e/tuned/admin/admin.py
from __future__ import print_function
import tuned.admin
from tuned.utils.commands import commands
from tuned.profiles import Locator as profiles_locator
from .exceptions import TunedAdminDBusException
from tuned.exceptions import TunedException
import tuned.consts as consts
from tuned.utils.profile_recommender import ProfileRecommender
import os
import sys
import errno
import time
import threading
import logging

class Admin(object):
	def __init__(self, dbus = True, debug = False, asynco = False, timeout = consts.ADMIN_TIMEOUT, log_level = logging.ERROR):
		self._dbus = dbus
		self._debug = debug
		self._async = asynco
		self._timeout = timeout
		self._cmd = commands(debug)
		self._profiles_locator = profiles_locator(consts.LOAD_DIRECTORIES)
		self._daemon_action_finished = threading.Event()
		self._daemon_action_profile = ""
		self._daemon_action_result = True
		self._daemon_action_errstr = ""
		self._controller = None
		self._log_token = None
		self._log_level = log_level
		self._profile_recommender = ProfileRecommender()
		if self._dbus:
			self._controller = tuned.admin.DBusController(consts.DBUS_BUS, consts.DBUS_INTERFACE, consts.DBUS_OBJECT, debug)
			try:
				self._controller.set_signal_handler(consts.DBUS_SIGNAL_PROFILE_CHANGED, self._signal_profile_changed_cb)
			except TunedAdminDBusException as e:
				self._error(e)
				self._dbus = False

	def _error(self, message):
		print(message, file=sys.stderr)

	def _signal_profile_changed_cb(self, profile_name, result, errstr):
		# ignore successive signals if the signal is not yet processed
		if not self._daemon_action_finished.is_set():
			self._daemon_action_profile = profile_name
			self._daemon_action_result = result
			self._daemon_action_errstr = errstr
			self._daemon_action_finished.set()

	def _tuned_is_running(self):
		try:
			os.kill(int(self._cmd.read_file(consts.PID_FILE)), 0)
		except OSError as e:
			return e.errno == errno.EPERM
		except (ValueError, IOError) as e:
			return False
		return True

	# run the action specified by the action_name with args
	def action(self, action_name, *args, **kwargs):
		if action_name is None or action_name == "":
			return False
		action = None
		action_dbus = None
		res = False
		try:
			action_dbus = getattr(self, "_action_dbus_" + action_name)
		except AttributeError as e:
			self._dbus = False
		try:
			action = getattr(self, "_action_" + action_name)
		except AttributeError as e:
			if not self._dbus:
				self._error(str(e) + ", action '%s' is not implemented" % action_name)
				return False
		if self._dbus:
			try:
				self._controller.set_on_exit_action(self._log_capture_finish)
				self._controller.set_action(action_dbus, *args, **kwargs)
				res = self._controller.run()
			except TunedAdminDBusException as e:
				self._error(e)
				self._dbus = False
		if not self._dbus:
			res = action(*args, **kwargs)
		return res

	def _print_profiles(self, profile_names):
		print("Available profiles:")
		for profile in profile_names:
			if profile[1] is not None and profile[1] != "":
				print(self._cmd.align_str("- %s" % profile[0], 30, "- %s" % profile[1]))
			else:
				print("- %s" % profile[0])

	def _action_dbus_list_profiles(self):
		try:
			profile_names = self._controller.profiles2()
		except TunedAdminDBusException as e:
			# fallback to older API
			profile_names = [(profile, "") for profile in self._controller.profiles()]
		self._print_profiles(profile_names)
		self._action_dbus_active()
		return self._controller.exit(True)

	def _action_list_profiles(self):
		self._print_profiles(self._profiles_locator.get_known_names_summary())
		self._action_active()
		return True

	def _dbus_get_active_profile(self):
		profile_name = self._controller.active_profile()
		if profile_name == "":
			profile_name = None
		self._controller.exit(True)
		return profile_name

	def _get_active_profile(self):
		profile_name, manual = self._cmd.get_active_profile()
		return profile_name

	def _get_profile_mode(self):
		(profile, manual) = self._cmd.get_active_profile()
		if manual is None:
			manual = profile is not None
		return consts.ACTIVE_PROFILE_MANUAL if manual else consts.ACTIVE_PROFILE_AUTO

	def _dbus_get_post_loaded_profile(self):
		profile_name = self._controller.post_loaded_profile()
		if profile_name == "":
			profile_name = None
		return profile_name

	def _get_post_loaded_profile(self):
		profile_name = self._cmd.get_post_loaded_profile()
		return profile_name

	def _print_profile_info(self, profile, profile_info):
		if profile_info[0] == True:
			print("Profile name:")
			print(profile_info[1])
			print()
			print("Profile summary:")
			print(profile_info[2])
			print()
			print("Profile description:")
			print(profile_info[3])
			return True
		else:
			print("Unable to get information about profile '%s'" % profile)
			return False

	def _action_dbus_profile_info(self, profile = ""):
		if profile == "":
			profile = self._dbus_get_active_profile()
		if profile:
			res = self._print_profile_info(profile, self._controller.profile_info(profile))
		else:
			print("No current active profile.")
			res = False
		return self._controller.exit(res)

	def _action_profile_info(self, profile = ""):
if profile == "": try: profile = self._get_active_profile() if profile is None: print("No current active profile.") return False except TunedException as e: self._error(str(e)) return False return self._print_profile_info(profile, self._profiles_locator.get_profile_attrs(profile, [consts.PROFILE_ATTR_SUMMARY, consts.PROFILE_ATTR_DESCRIPTION], ["", ""])) def _print_profile_name(self, profile_name): if profile_name is None: print("No current active profile.") return False else: print("Current active profile: %s" % profile_name) return True def _print_post_loaded_profile(self, profile_name): if profile_name: print("Current post-loaded profile: %s" % profile_name) def _action_dbus_active(self): active_profile = self._dbus_get_active_profile() res = self._print_profile_name(active_profile) if res: post_loaded_profile = self._dbus_get_post_loaded_profile() self._print_post_loaded_profile(post_loaded_profile) return self._controller.exit(res) def _action_active(self): try: profile_name = self._get_active_profile() post_loaded_profile = self._get_post_loaded_profile() # The result of the DBus call active_profile includes # the post-loaded profile, so add it here as well if post_loaded_profile: if profile_name: profile_name += " " else: profile_name = "" profile_name += post_loaded_profile except TunedException as e: self._error(str(e)) return False if profile_name is not None and not self._tuned_is_running(): print("It seems that tuned daemon is not running, preset profile is not activated.") print("Preset profile: %s" % profile_name) if post_loaded_profile: print("Preset post-loaded profile: %s" % post_loaded_profile) return True res = self._print_profile_name(profile_name) self._print_post_loaded_profile(post_loaded_profile) return res def _print_profile_mode(self, mode): print("Profile selection mode: " + mode) def _action_dbus_profile_mode(self): mode, error = self._controller.profile_mode() self._print_profile_mode(mode) if error != "": self._error(error) return 
self._controller.exit(False) return self._controller.exit(True) def _action_profile_mode(self): try: mode = self._get_profile_mode() self._print_profile_mode(mode) return True except TunedException as e: self._error(str(e)) return False def _profile_print_status(self, ret, msg): if ret: if not self._controller.is_running() and not self._controller.start(): self._error("Cannot enable the tuning.") ret = False else: self._error(msg) return ret def _action_dbus_wait_profile(self, profile_name): if time.time() >= self._timestamp + self._timeout: print("Operation timed out after waiting %d seconds(s), you may try to increase timeout by using --timeout command line option or using --async." % self._timeout) return self._controller.exit(False) if self._daemon_action_finished.isSet(): if self._daemon_action_profile == profile_name: if not self._daemon_action_result: print("Error changing profile: %s" % self._daemon_action_errstr) return self._controller.exit(False) return self._controller.exit(True) return False def _log_capture_finish(self): if self._log_token is None or self._log_token == "": return try: log_msgs = self._controller.log_capture_finish( self._log_token) self._log_token = None print(log_msgs, end = "", file = sys.stderr) sys.stderr.flush() except TunedAdminDBusException as e: self._error("Error: Failed to stop log capture. 
Restart the TuneD daemon to prevent a memory leak.") def _action_dbus_profile(self, profiles): if len(profiles) == 0: return self._action_dbus_list() profile_name = " ".join(profiles) if profile_name == "": return self._controller.exit(False) self._daemon_action_finished.clear() if not self._async and self._log_level is not None: # 25 seconds default DBus timeout + 5 secs safety margin timeout = self._timeout + 25 + 5 self._log_token = self._controller.log_capture_start( self._log_level, timeout) (ret, msg) = self._controller.switch_profile(profile_name) if self._async or not ret: return self._controller.exit(self._profile_print_status(ret, msg)) else: self._timestamp = time.time() self._controller.set_action(self._action_dbus_wait_profile, profile_name) return self._profile_print_status(ret, msg) def _restart_tuned(self): print("Trying to (re)start tuned...") (ret, msg) = self._cmd.execute(["service", "tuned", "restart"]) if ret == 0: print("TuneD (re)started, changes applied.") else: print("TuneD (re)start failed, you need to (re)start TuneD by hand for changes to apply.") def _set_profile(self, profile_name, manual): if profile_name in self._profiles_locator.get_known_names(): try: self._cmd.save_active_profile(profile_name, manual) self._restart_tuned() return True except TunedException as e: self._error(str(e)) self._error("Unable to switch profile.") return False else: self._error("Requested profile '%s' doesn't exist." 
% profile_name) return False def _action_profile(self, profiles): if len(profiles) == 0: return self._action_list_profiles() profile_name = " ".join(profiles) if profile_name == "": return False return self._set_profile(profile_name, True) def _action_dbus_auto_profile(self): profile_name = self._controller.recommend_profile() self._daemon_action_finished.clear() if not self._async and self._log_level is not None: # 25 seconds default DBus timeout + 5 secs safety margin timeout = self._timeout + 25 + 5 self._log_token = self._controller.log_capture_start( self._log_level, timeout) (ret, msg) = self._controller.auto_profile() if self._async or not ret: return self._controller.exit(self._profile_print_status(ret, msg)) else: self._timestamp = time.time() self._controller.set_action(self._action_dbus_wait_profile, profile_name) return self._profile_print_status(ret, msg) def _action_auto_profile(self): profile_name = self._profile_recommender.recommend() return self._set_profile(profile_name, False) def _action_dbus_recommend_profile(self): print(self._controller.recommend_profile()) return self._controller.exit(True) def _action_recommend_profile(self): print(self._profile_recommender.recommend()) return True def _action_dbus_verify_profile(self, ignore_missing): if ignore_missing: ret = self._controller.verify_profile_ignore_missing() else: ret = self._controller.verify_profile() if ret: print("Verfication succeeded, current system settings match the preset profile.") else: print("Verification failed, current system settings differ from the preset profile.") print("You can mostly fix this by restarting the TuneD daemon, e.g.:") print(" systemctl restart tuned") print("or") print(" service tuned restart") print("Sometimes (if some plugins like bootloader are used) a reboot may be required.") print("See TuneD log file ('%s') for details." 
% consts.LOG_FILE) return self._controller.exit(ret) def _action_verify_profile(self, ignore_missing): print("Not supported in no_daemon mode.") return False def _action_dbus_off(self): # 25 seconds default DBus timeout + 5 secs safety margin timeout = 25 + 5 self._log_token = self._controller.log_capture_start( self._log_level, timeout) ret = self._controller.off() if not ret: self._error("Cannot disable active profile.") return self._controller.exit(ret) def _action_off(self): print("Not supported in no_daemon mode.") return False def _action_dbus_list(self, list_choice="profiles", verbose=False): """Print accessible profiles or plugins got from TuneD dbus api Keyword arguments: list_choice -- argument from command line deciding what will be listed verbose -- if True then list plugin's config options and their hints if possible. Functional only with plugin listing, with profiles this argument is omitted """ if list_choice == "profiles": return self._action_dbus_list_profiles() elif list_choice == "plugins": return self._action_dbus_list_plugins(verbose=verbose) def _action_list(self, list_choice="profiles", verbose=False): """Print accessible profiles or plugins with no daemon mode Keyword arguments: list_choice -- argument from command line deciding what will be listed verbose -- Plugins cannot be listed in this mode, so verbose argument is here only because argparse module always supplies verbose option and if verbose was not here it would result in error """ if list_choice == "profiles": return self._action_list_profiles() elif list_choice == "plugins": return self._action_list_plugins(verbose=verbose) def _action_dbus_list_plugins(self, verbose=False): """Print accessible plugins Keyword arguments: verbose -- if is set to True then parameters and hints are printed """ plugins = self._controller.get_plugins() for plugin in plugins.keys(): print(plugin) if not verbose or len(plugins[plugin]) == 0: continue hints = self._controller.get_plugin_hints(plugin) for 
parameter in plugins[plugin]: print("\t%s" %(parameter)) hint = hints.get(parameter, None) if hint: print("\t\t%s" %(hint)) return self._controller.exit(True) def _action_list_plugins(self, verbose=False): print("Not supported in no_daemon mode.") return False 0707010000011A000081A40000000000000000000000016391BC3A0000121F000000000000000000000000000000000000003B00000000tuned-2.19.0.29+git.b894a3e/tuned/admin/dbus_controller.pyimport dbus import dbus.exceptions import time from dbus.mainloop.glib import DBusGMainLoop from gi.repository import GLib, GObject from .exceptions import TunedAdminDBusException __all__ = ["DBusController"] class DBusController(object): def __init__(self, bus_name, interface_name, object_name, debug = False): self._bus_name = bus_name self._interface_name = interface_name self._object_name = object_name self._proxy = None self._interface = None self._debug = debug self._main_loop = None self._action = None self._on_exit_action = None self._ret = True self._exit = False self._exception = None def _init_proxy(self): try: if self._proxy is None: DBusGMainLoop(set_as_default=True) self._main_loop = GLib.MainLoop() bus = dbus.SystemBus() self._proxy = bus.get_object(self._bus_name, self._object_name) self._interface = dbus.Interface(self._proxy, dbus_interface = self._interface_name) except dbus.exceptions.DBusException: raise TunedAdminDBusException("Cannot talk to TuneD daemon via DBus. 
Is TuneD daemon running?") def _idle(self): if self._action is not None: # This may (and very probably will) run in child thread, so catch and pass exceptions to the main thread try: self._action_exit_code = self._action(*self._action_args, **self._action_kwargs) except TunedAdminDBusException as e: self._exception = e self._exit = True if self._exit: if self._on_exit_action is not None: self._on_exit_action(*self._on_exit_action_args, **self._on_exit_action_kwargs) self._main_loop.quit() return False else: time.sleep(1) return True def set_on_exit_action(self, action, *args, **kwargs): self._on_exit_action = action self._on_exit_action_args = args self._on_exit_action_kwargs = kwargs def set_action(self, action, *args, **kwargs): self._action = action self._action_args = args self._action_kwargs = kwargs def run(self): self._exception = None GLib.idle_add(self._idle) self._main_loop.run() # Pass exception happened in child thread to the caller if self._exception is not None: raise self._exception return self._ret def _call(self, method_name, *args, **kwargs): self._init_proxy() try: method = self._interface.get_dbus_method(method_name) return method(*args, timeout=40) except dbus.exceptions.DBusException as dbus_exception: err_str = "DBus call to TuneD daemon failed" if self._debug: err_str += " (%s)" % str(dbus_exception) raise TunedAdminDBusException(err_str) def set_signal_handler(self, signal, cb): self._init_proxy() self._proxy.connect_to_signal(signal, cb) def is_running(self): return self._call("is_running") def start(self): return self._call("start") def stop(self): return self._call("stop") def profiles(self): return self._call("profiles") def profiles2(self): return self._call("profiles2") def profile_info(self, profile_name): return self._call("profile_info", profile_name) def log_capture_start(self, log_level, timeout): return self._call("log_capture_start", log_level, timeout) def log_capture_finish(self, token): return 
self._call("log_capture_finish", token) def active_profile(self): return self._call("active_profile") def profile_mode(self): return self._call("profile_mode") def post_loaded_profile(self): return self._call("post_loaded_profile") def switch_profile(self, new_profile): if new_profile == "": return (False, "No profile specified") return self._call("switch_profile", new_profile) def auto_profile(self): return self._call("auto_profile") def recommend_profile(self): return self._call("recommend_profile") def verify_profile(self): return self._call("verify_profile") def verify_profile_ignore_missing(self): return self._call("verify_profile_ignore_missing") def off(self): return self._call("disable") def get_plugins(self): """Return dict with plugin names and their hints Return: dictionary -- {plugin_name: {parameter_name: default_value}} """ return self._call("get_all_plugins") def get_plugin_documentation(self, plugin_name): """Return docstring of plugin's class""" return self._call("get_plugin_documentation", plugin_name) def get_plugin_hints(self, plugin_name): """Return dictionary with parameters of plugin and their hints Parameters: plugin_name -- name of plugin Return: dictionary -- {parameter_name: hint} """ return self._call("get_plugin_hints", plugin_name) def exit(self, ret): self.set_action(None) self._ret = ret self._exit = True return ret 0707010000011B000081A40000000000000000000000016391BC3A0000005F000000000000000000000000000000000000003600000000tuned-2.19.0.29+git.b894a3e/tuned/admin/exceptions.pyimport tuned.exceptions class TunedAdminDBusException(tuned.exceptions.TunedException): pass 0707010000011C000081A40000000000000000000000016391BC3A000018CD000000000000000000000000000000000000002C00000000tuned-2.19.0.29+git.b894a3e/tuned/consts.pyimport logging GLOBAL_CONFIG_FILE = "/etc/tuned/tuned-main.conf" ACTIVE_PROFILE_FILE = "/etc/tuned/active_profile" PROFILE_MODE_FILE = "/etc/tuned/profile_mode" POST_LOADED_PROFILE_FILE = "/etc/tuned/post_loaded_profile" 
PROFILE_FILE = "tuned.conf" RECOMMEND_CONF_FILE = "/etc/tuned/recommend.conf" DAEMONIZE_PARENT_TIMEOUT = 5 NAMESPACE = "com.redhat.tuned" DBUS_BUS = NAMESPACE DBUS_INTERFACE = "com.redhat.tuned.control" DBUS_OBJECT = "/Tuned" DEFAULT_PROFILE = "balanced" DEFAULT_STORAGE_FILE = "/run/tuned/save.pickle" LOAD_DIRECTORIES = ["/usr/lib/tuned", "/etc/tuned"] PERSISTENT_STORAGE_DIR = "/var/lib/tuned" PLUGIN_MAIN_UNIT_NAME = "main" # Magic section header because ConfigParser does not support "headerless" config MAGIC_HEADER_NAME = "this_is_some_magic_section_header_because_of_compatibility" RECOMMEND_DIRECTORIES = ["/usr/lib/tuned/recommend.d", "/etc/tuned/recommend.d"] TMP_FILE_SUFFIX = ".tmp" # max. number of consecutive errors to give up ERROR_THRESHOLD = 3 # bootloader plugin configuration BOOT_DIR = "/boot" GRUB2_CFG_FILES = ["/etc/grub2.cfg", "/etc/grub2-efi.cfg"] GRUB2_CFG_DIR = "/etc/grub.d" GRUB2_TUNED_TEMPLATE_NAME = "00_tuned" GRUB2_TUNED_TEMPLATE_PATH = GRUB2_CFG_DIR + "/" + GRUB2_TUNED_TEMPLATE_NAME GRUB2_TEMPLATE_HEADER_BEGIN = "### BEGIN /etc/grub.d/" + GRUB2_TUNED_TEMPLATE_NAME + " ###" GRUB2_TEMPLATE_HEADER_END = "### END /etc/grub.d/" + GRUB2_TUNED_TEMPLATE_NAME + " ###" GRUB2_TUNED_VAR = "tuned_params" GRUB2_TUNED_INITRD_VAR = "tuned_initrd" GRUB2_DEFAULT_ENV_FILE = "/etc/default/grub" INITRD_IMAGE_DIR = "/boot" BOOT_CMDLINE_TUNED_VAR = "TUNED_BOOT_CMDLINE" BOOT_CMDLINE_INITRD_ADD_VAR = "TUNED_BOOT_INITRD_ADD" BOOT_CMDLINE_KARGS_DELETED_VAR = "TUNED_BOOT_KARGS_DELETED" BOOT_CMDLINE_FILE = "/etc/tuned/bootcmdline" PETITBOOT_DETECT_DIR = "/sys/firmware/opal" MACHINE_ID_FILE = "/etc/machine-id" KERNEL_UPDATE_HOOK_FILE = "/usr/lib/kernel/install.d/92-tuned.install" BLS_ENTRIES_PATH = "/boot/loader/entries" # scheduler plugin configuration # how many times retry to move tasks to parent cgroup on cgroup cleanup CGROUP_CLEANUP_TASKS_RETRY = 10 PROCFS_MOUNT_POINT = "/proc" DEF_CGROUP_MOUNT_POINT = "/sys/fs/cgroup/cpuset" DEF_CGROUP_MODE = 0o770 # service plugin 
configuration SERVICE_SYSTEMD_CFG_PATH = "/etc/systemd/system/%s.service.d" DEF_SERVICE_CFG_DIR_MODE = 0o755 # modules plugin configuration MODULES_FILE = "/etc/modprobe.d/tuned.conf" # systemd plugin configuration SYSTEMD_SYSTEM_CONF_FILE = "/etc/systemd/system.conf" SYSTEMD_CPUAFFINITY_VAR = "CPUAffinity" # irqbalance plugin configuration IRQBALANCE_SYSCONFIG_FILE = "/etc/sysconfig/irqbalance" # built-in functions configuration SYSFS_CPUS_PATH = "/sys/devices/system/cpu" # number of backups LOG_FILE_COUNT = 2 LOG_FILE_MAXBYTES = 100*1000 LOG_FILE = "/var/log/tuned/tuned.log" PID_FILE = "/run/tuned/tuned.pid" SYSTEM_RELEASE_FILE = "/etc/system-release-cpe" # prefix for functions plugins FUNCTION_PREFIX = "function_" # prefix for exported environment variables when calling scripts ENV_PREFIX = "TUNED_" # tuned-gui PREFIX_PROFILE_FACTORY = "System" PREFIX_PROFILE_USER = "User" # After adding new option to tuned-main.conf add here its name with CFG_ prefix # and eventually default value with CFG_DEF_ prefix (default is None) # and function for check with CFG_FUNC_ prefix # (see configobj for methods, default is get for string) CFG_DAEMON = "daemon" CFG_DYNAMIC_TUNING = "dynamic_tuning" CFG_SLEEP_INTERVAL = "sleep_interval" CFG_UPDATE_INTERVAL = "update_interval" CFG_RECOMMEND_COMMAND = "recommend_command" CFG_REAPPLY_SYSCTL = "reapply_sysctl" CFG_DEFAULT_INSTANCE_PRIORITY = "default_instance_priority" CFG_UDEV_BUFFER_SIZE = "udev_buffer_size" CFG_LOG_FILE_COUNT = "log_file_count" CFG_LOG_FILE_MAX_SIZE = "log_file_max_size" CFG_UNAME_STRING = "uname_string" CFG_CPUINFO_STRING = "cpuinfo_string" # no_daemon mode CFG_DEF_DAEMON = True CFG_FUNC_DAEMON = "getboolean" # default configuration CFG_DEF_DYNAMIC_TUNING = True CFG_FUNC_DYNAMIC_TUNING = "getboolean" # how long to sleep before checking for events (in seconds) CFG_DEF_SLEEP_INTERVAL = 1 CFG_FUNC_SLEEP_INTERVAL = "getint" # update interval for dynamic tuning (in seconds) CFG_DEF_UPDATE_INTERVAL = 10 
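The comment above describes the convention for new tuned-main.conf options: a `CFG_` constant for the name, an optional `CFG_DEF_` default (default `None`), and an optional `CFG_FUNC_` typed-getter name (default `get`). A minimal sketch of how such triplets can be resolved, with `FakeConsts` and `get_option` as illustrative stand-ins for `tuned.consts` and the real config reader:

```python
# Illustrative sketch of the CFG_ / CFG_DEF_ / CFG_FUNC_ naming convention
# described above; FakeConsts stands in for tuned.consts.
import configparser

class FakeConsts:
	CFG_DAEMON = "daemon"
	CFG_DEF_DAEMON = True
	CFG_FUNC_DAEMON = "getboolean"
	CFG_SLEEP_INTERVAL = "sleep_interval"
	CFG_DEF_SLEEP_INTERVAL = 1
	CFG_FUNC_SLEEP_INTERVAL = "getint"

def get_option(parser, section, name):
	# Derive the default value and the typed getter from the constant names.
	default = getattr(FakeConsts, "CFG_DEF_" + name.upper(), None)
	func = getattr(FakeConsts, "CFG_FUNC_" + name.upper(), "get")
	getter = getattr(parser, func)  # e.g. parser.getboolean / parser.getint
	try:
		return getter(section, name)
	except (configparser.NoSectionError, configparser.NoOptionError):
		return default
```

With this layout, adding an option only requires new constants; the reader needs no per-option code.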
CFG_FUNC_UPDATE_INTERVAL = "getint"
# recommend command availability
CFG_DEF_RECOMMEND_COMMAND = True
CFG_FUNC_RECOMMEND_COMMAND = "getboolean"
# reapply system sysctl
CFG_DEF_REAPPLY_SYSCTL = True
CFG_FUNC_REAPPLY_SYSCTL = "getboolean"
# default instance priority
CFG_DEF_DEFAULT_INSTANCE_PRIORITY = 0
CFG_FUNC_DEFAULT_INSTANCE_PRIORITY = "getint"
# default pyudev.Monitor buffer size
CFG_DEF_UDEV_BUFFER_SIZE = 1024 * 1024
# default log file count
CFG_DEF_LOG_FILE_COUNT = 2
CFG_FUNC_LOG_FILE_COUNT = "getint"
# default log file max size
CFG_DEF_LOG_FILE_MAX_SIZE = 1024 * 1024
PATH_CPU_DMA_LATENCY = "/dev/cpu_dma_latency"
# profile attributes which can be specified in the main section
PROFILE_ATTR_SUMMARY = "summary"
PROFILE_ATTR_DESCRIPTION = "description"
DBUS_SIGNAL_PROFILE_CHANGED = "profile_changed"
STR_HINT_REBOOT = "you need to reboot for changes to take effect"
STR_VERIFY_PROFILE_DEVICE_VALUE_OK = "verify: passed: device %s: '%s' = '%s'"
STR_VERIFY_PROFILE_VALUE_OK = "verify: passed: '%s' = '%s'"
STR_VERIFY_PROFILE_OK = "verify: passed: '%s'"
STR_VERIFY_PROFILE_DEVICE_VALUE_MISSING = "verify: skipped, missing: device %s: '%s'"
STR_VERIFY_PROFILE_VALUE_MISSING = "verify: skipped, missing: '%s'"
STR_VERIFY_PROFILE_DEVICE_VALUE_FAIL = "verify: failed: device %s: '%s' = '%s', expected '%s'"
STR_VERIFY_PROFILE_VALUE_FAIL = "verify: failed: '%s' = '%s', expected '%s'"
STR_VERIFY_PROFILE_CMDLINE_FAIL = "verify: failed: cmdline arg '%s', expected '%s'"
STR_VERIFY_PROFILE_CMDLINE_FAIL_MISSING = "verify: failed: cmdline arg '%s' is missing, expected '%s'"
STR_VERIFY_PROFILE_FAIL = "verify: failed: '%s'"
# timeout for tuned-adm operations in seconds
ADMIN_TIMEOUT = 600
# Strings for /etc/tuned/profile_mode specifying if the active profile
# was set automatically or manually
ACTIVE_PROFILE_AUTO = "auto"
ACTIVE_PROFILE_MANUAL = "manual"
LOG_LEVEL_CONSOLE = 60
LOG_LEVEL_CONSOLE_NAME = "CONSOLE"
CAPTURE_LOG_LEVEL = "console"
CAPTURE_LOG_LEVELS = {
	"debug": logging.DEBUG,
"info": logging.INFO, "warn": logging.WARN, "error": logging.ERROR, "console": LOG_LEVEL_CONSOLE, "none": None, } 0707010000011D000041ED0000000000000000000000026391BC3A00000000000000000000000000000000000000000000002900000000tuned-2.19.0.29+git.b894a3e/tuned/daemon0707010000011E000081A40000000000000000000000016391BC3A0000004B000000000000000000000000000000000000003500000000tuned-2.19.0.29+git.b894a3e/tuned/daemon/__init__.pyfrom .application import * from .controller import * from .daemon import * 0707010000011F000081A40000000000000000000000016391BC3A00001BC8000000000000000000000000000000000000003800000000tuned-2.19.0.29+git.b894a3e/tuned/daemon/application.pyfrom tuned import storage, units, monitors, plugins, profiles, exports, hardware from tuned.exceptions import TunedException import tuned.logs import tuned.version from . import controller from . import daemon import signal import os import sys import select import struct import tuned.consts as consts from tuned.utils.global_config import GlobalConfig log = tuned.logs.get() __all__ = ["Application"] class Application(object): def __init__(self, profile_name = None, config = None): # os.uname()[2] is for the python-2.7 compatibility, it's the release string # like e.g. 
'5.15.13-100.fc34.x86_64' log.info("TuneD: %s, kernel: %s" % (tuned.version.TUNED_VERSION_STR, os.uname()[2])) self._dbus_exporter = None storage_provider = storage.PickleProvider() storage_factory = storage.Factory(storage_provider) self.config = GlobalConfig() if config is None else config if self.config.get_bool(consts.CFG_DYNAMIC_TUNING): log.info("dynamic tuning is enabled (can be overridden in plugins)") else: log.info("dynamic tuning is globally disabled") monitors_repository = monitors.Repository() udev_buffer_size = self.config.get_size("udev_buffer_size", consts.CFG_DEF_UDEV_BUFFER_SIZE) hardware_inventory = hardware.Inventory(buffer_size=udev_buffer_size) device_matcher = hardware.DeviceMatcher() device_matcher_udev = hardware.DeviceMatcherUdev() plugin_instance_factory = plugins.instance.Factory() self.variables = profiles.variables.Variables() plugins_repository = plugins.Repository(monitors_repository, storage_factory, hardware_inventory,\ device_matcher, device_matcher_udev, plugin_instance_factory, self.config, self.variables) def_instance_priority = int(self.config.get(consts.CFG_DEFAULT_INSTANCE_PRIORITY, consts.CFG_DEF_DEFAULT_INSTANCE_PRIORITY)) unit_manager = units.Manager( plugins_repository, monitors_repository, def_instance_priority, hardware_inventory, self.config) profile_factory = profiles.Factory() profile_merger = profiles.Merger() profile_locator = profiles.Locator(consts.LOAD_DIRECTORIES) profile_loader = profiles.Loader(profile_locator, profile_factory, profile_merger, self.config, self.variables) self._daemon = daemon.Daemon(unit_manager, profile_loader, profile_name, self.config, self) self._controller = controller.Controller(self._daemon, self.config) self._init_signals() self._pid_file = None def _handle_signal(self, signal_number, handler): def handler_wrapper(_signal_number, _frame): if signal_number == _signal_number: handler() signal.signal(signal_number, handler_wrapper) def _init_signals(self): 
		self._handle_signal(signal.SIGHUP, self._controller.reload)
		self._handle_signal(signal.SIGINT, self._controller.terminate)
		self._handle_signal(signal.SIGTERM, self._controller.terminate)

	def attach_to_dbus(self, bus_name, object_name, interface_name):
		if self._dbus_exporter is not None:
			raise TunedException("DBus interface is already initialized.")
		self._dbus_exporter = exports.dbus.DBusExporter(bus_name, interface_name, object_name)
		exports.register_exporter(self._dbus_exporter)
		exports.register_object(self._controller)

	def _daemonize_parent(self, parent_in_fd, child_out_fd):
		"""
		Wait until the child signals that the initialization is complete
		by writing some uninteresting data into the pipe.
		"""
		os.close(child_out_fd)
		(read_ready, _, _) = select.select([parent_in_fd], [], [], consts.DAEMONIZE_PARENT_TIMEOUT)
		if len(read_ready) != 1:
			os.close(parent_in_fd)
			raise TunedException("Cannot daemonize, timeout when waiting for the child process.")
		response = os.read(parent_in_fd, 8)
		os.close(parent_in_fd)
		if len(response) == 0:
			raise TunedException("Cannot daemonize, no response from child process received.")
		try:
			val = struct.unpack("?", response)[0]
		except struct.error:
			raise TunedException("Cannot daemonize, invalid response from child process received.")
		if not val:
			raise TunedException("Cannot daemonize, child process reports failure.")

	def write_pid_file(self, pid_file = consts.PID_FILE):
		self._pid_file = pid_file
		self._delete_pid_file()
		try:
			dir_name = os.path.dirname(self._pid_file)
			if not os.path.exists(dir_name):
				os.makedirs(dir_name)
			with os.fdopen(os.open(self._pid_file, os.O_CREAT|os.O_TRUNC|os.O_WRONLY, 0o644), "w") as f:
				f.write("%d" % os.getpid())
		except (OSError, IOError) as error:
			log.critical("cannot write the PID to %s: %s" % (self._pid_file, str(error)))

	def _delete_pid_file(self):
		if os.path.exists(self._pid_file):
			try:
				os.unlink(self._pid_file)
			except OSError as error:
				log.warning("cannot remove existing PID file %s, %s" % (self._pid_file, str(error)))

	def _daemonize_child(self, pid_file, parent_in_fd, child_out_fd):
		"""
		Finishes the daemonizing process, writes a PID file and signals
		to the parent that the initialization is complete.
		"""
		os.close(parent_in_fd)
		os.chdir("/")
		os.setsid()
		os.umask(0)
		try:
			pid = os.fork()
			if pid > 0:
				sys.exit(0)
		except OSError as error:
			log.critical("cannot daemonize, fork() error: %s" % str(error))
			val = struct.pack("?", False)
			os.write(child_out_fd, val)
			os.close(child_out_fd)
			raise TunedException("Cannot daemonize, second fork() failed.")
		fd = open("/dev/null", "w+")
		os.dup2(fd.fileno(), sys.stdin.fileno())
		os.dup2(fd.fileno(), sys.stdout.fileno())
		os.dup2(fd.fileno(), sys.stderr.fileno())
		self.write_pid_file(pid_file)
		log.debug("successfully daemonized")
		val = struct.pack("?", True)
		os.write(child_out_fd, val)
		os.close(child_out_fd)

	def daemonize(self, pid_file = consts.PID_FILE):
		"""
		Daemonizes the application. In case of failure, TunedException
		is raised in the parent process. If the operation is successful,
		the main process is terminated and only the child process returns
		from this method.
		"""
		parent_child_fds = os.pipe()
		try:
			child_pid = os.fork()
		except OSError as error:
			os.close(parent_child_fds[0])
			os.close(parent_child_fds[1])
			raise TunedException("Cannot daemonize, fork() failed.")
		try:
			if child_pid > 0:
				self._daemonize_parent(*parent_child_fds)
				sys.exit(0)
			else:
				self._daemonize_child(pid_file, *parent_child_fds)
		except:
			# pass exceptions only into parent process
			if child_pid > 0:
				raise
			else:
				sys.exit(1)

	@property
	def daemon(self):
		return self._daemon

	@property
	def controller(self):
		return self._controller

	def run(self, daemon):
		# override global config if run from the command line with the daemon option (-d)
		if daemon:
			self.config.set(consts.CFG_DAEMON, True)
		if not self.config.get_bool(consts.CFG_DAEMON, consts.CFG_DEF_DAEMON):
			log.warn("Using one-shot, no-daemon mode; most of the functionality will not be available. This can be changed in the global config.")
		result = self._controller.run()
		if self.config.get_bool(consts.CFG_DAEMON, consts.CFG_DEF_DAEMON):
			exports.stop()
		if self._pid_file is not None:
			self._delete_pid_file()
		return result
07070100000120000081A40000000000000000000000016391BC3A0000223F000000000000000000000000000000000000003700000000tuned-2.19.0.29+git.b894a3e/tuned/daemon/controller.py
from tuned import exports
import tuned.logs
import tuned.exceptions
from tuned.exceptions import TunedException
import threading
import tuned.consts as consts
from tuned.utils.commands import commands

__all__ = ["Controller"]

log = tuned.logs.get()

class TimerStore(object):
	def __init__(self):
		self._timers = dict()
		self._timers_lock = threading.Lock()

	def store_timer(self, token, timer):
		with self._timers_lock:
			self._timers[token] = timer

	def drop_timer(self, token):
		with self._timers_lock:
			try:
				timer = self._timers[token]
				timer.cancel()
				del self._timers[token]
			except:
				pass

	def cancel_all(self):
		with self._timers_lock:
			for timer in self._timers.values():
				timer.cancel()
			self._timers.clear()

class Controller(tuned.exports.interfaces.ExportableInterface):
	"""
Controller's purpose is to keep the program running, start/stop the tuning, and export the controller interface (currently only over D-Bus). """ def __init__(self, daemon, global_config): super(Controller, self).__init__() self._daemon = daemon self._global_config = global_config self._terminate = threading.Event() self._cmd = commands() self._timer_store = TimerStore() def run(self): """ Controller main loop. The call is blocking. """ log.info("starting controller") res = self.start() daemon = self._global_config.get_bool(consts.CFG_DAEMON, consts.CFG_DEF_DAEMON) if not res and daemon: exports.start() if daemon: self._terminate.clear() # we have to pass some timeout, otherwise signals will not work while not self._cmd.wait(self._terminate, 10): pass log.info("terminating controller") self.stop() def terminate(self): self._terminate.set() @exports.signal("sbs") def profile_changed(self, profile_name, result, errstr): pass # exports decorator checks the authorization (currently through polkit), caller is None if # no authorization was performed (i.e. 
the call should process as authorized), string # identifying caller (with DBus it's the caller bus name) if authorized and empty # string if not authorized, caller must be the last argument def _log_capture_abort(self, token): tuned.logs.log_capture_finish(token) self._timer_store.drop_timer(token) @exports.export("ii", "s") def log_capture_start(self, log_level, timeout, caller = None): if caller == "": return "" token = tuned.logs.log_capture_start(log_level) if token is None: return "" if timeout > 0: timer = threading.Timer(timeout, self._log_capture_abort, args = [token]) self._timer_store.store_timer(token, timer) timer.start() return "" if token is None else token @exports.export("s", "s") def log_capture_finish(self, token, caller = None): if caller == "": return "" res = tuned.logs.log_capture_finish(token) self._timer_store.drop_timer(token) return "" if res is None else res @exports.export("", "b") def start(self, caller = None): if caller == "": return False if self._global_config.get_bool(consts.CFG_DAEMON, consts.CFG_DEF_DAEMON): if self._daemon.is_running(): return True elif not self._daemon.is_enabled(): return False return self._daemon.start() @exports.export("", "b") def stop(self, caller = None): if caller == "": return False if not self._daemon.is_running(): res = True else: res = self._daemon.stop() self._timer_store.cancel_all() return res @exports.export("", "b") def reload(self, caller = None): if caller == "": return False if self._daemon.is_running(): stop_ok = self.stop() if not stop_ok: return False try: self._daemon.reload_profile_config() except TunedException as e: log.error("Failed to reload TuneD: %s" % e) return False return self.start() def _switch_profile(self, profile_name, manual): was_running = self._daemon.is_running() msg = "OK" success = True reapply = False try: if was_running: self._daemon.stop(profile_switch = True) self._daemon.set_profile(profile_name, manual) except tuned.exceptions.TunedException as e: success = 
False msg = str(e) if was_running and self._daemon.profile.name == profile_name: log.error("Failed to reapply profile '%s'. Did it change on disk and break?" % profile_name) reapply = True else: log.error("Failed to apply profile '%s'" % profile_name) finally: if was_running: if reapply: log.warn("Applying previously applied (possibly out-dated) profile '%s'." % profile_name) elif not success: log.info("Applying previously applied profile.") self._daemon.start() return (success, msg) @exports.export("s", "(bs)") def switch_profile(self, profile_name, caller = None): if caller == "": return (False, "Unauthorized") return self._switch_profile(profile_name, True) @exports.export("", "(bs)") def auto_profile(self, caller = None): if caller == "": return (False, "Unauthorized") profile_name = self.recommend_profile() return self._switch_profile(profile_name, False) @exports.export("", "s") def active_profile(self, caller = None): if caller == "": return "" if self._daemon.profile is not None: return self._daemon.profile.name else: return "" @exports.export("", "(ss)") def profile_mode(self, caller = None): if caller == "": return "unknown", "Unauthorized" manual = self._daemon.manual if manual is None: # This means no profile is applied. Check the preset value. 
			try:
				profile, manual = self._cmd.get_active_profile()
				if manual is None:
					manual = profile is not None
			except TunedException as e:
				mode = "unknown"
				error = str(e)
				return mode, error
		mode = consts.ACTIVE_PROFILE_MANUAL if manual else consts.ACTIVE_PROFILE_AUTO
		return mode, ""

	@exports.export("", "s")
	def post_loaded_profile(self, caller = None):
		if caller == "":
			return ""
		return self._daemon.post_loaded_profile or ""

	@exports.export("", "b")
	def disable(self, caller = None):
		if caller == "":
			return False
		if self._daemon.is_running():
			self._daemon.stop()
		if self._daemon.is_enabled():
			self._daemon.set_all_profiles(None, True, None, save_instantly=True)
		return True

	@exports.export("", "b")
	def is_running(self, caller = None):
		if caller == "":
			return False
		return self._daemon.is_running()

	@exports.export("", "as")
	def profiles(self, caller = None):
		if caller == "":
			return []
		return self._daemon.profile_loader.profile_locator.get_known_names()

	@exports.export("", "a(ss)")
	def profiles2(self, caller = None):
		if caller == "":
			return []
		return self._daemon.profile_loader.profile_locator.get_known_names_summary()

	@exports.export("s", "(bsss)")
	def profile_info(self, profile_name, caller = None):
		if caller == "":
			return (False, "", "", "")
		if profile_name is None or profile_name == "":
			profile_name = self.active_profile()
		return tuple(self._daemon.profile_loader.profile_locator.get_profile_attrs(profile_name, [consts.PROFILE_ATTR_SUMMARY, consts.PROFILE_ATTR_DESCRIPTION], [""]))

	@exports.export("", "s")
	def recommend_profile(self, caller = None):
		if caller == "":
			return ""
		return self._daemon.profile_recommender.recommend()

	@exports.export("", "b")
	def verify_profile(self, caller = None):
		if caller == "":
			return False
		return self._daemon.verify_profile(ignore_missing = False)

	@exports.export("", "b")
	def verify_profile_ignore_missing(self, caller = None):
		if caller == "":
			return False
		return self._daemon.verify_profile(ignore_missing = True)

	@exports.export("",
"a{sa{ss}}")
	def get_all_plugins(self, caller = None):
		"""Return dictionary with accessible plugins

		Return:
		dictionary -- {plugin_name: {parameter_name: default_value}}
		"""
		if caller == "":
			return {}
		plugins = {}
		for plugin_class in self._daemon.get_all_plugins():
			plugin_name = plugin_class.__module__.split(".")[-1].split("_", 1)[1]
			conf_options = plugin_class._get_config_options()
			plugins[plugin_name] = {}
			for key, val in conf_options.items():
				plugins[plugin_name][key] = str(val)
		return plugins

	@exports.export("s","s")
	def get_plugin_documentation(self, plugin_name, caller = None):
		"""Return docstring of plugin's class"""
		if caller == "":
			return ""
		return self._daemon.get_plugin_documentation(str(plugin_name))

	@exports.export("s","a{ss}")
	def get_plugin_hints(self, plugin_name, caller = None):
		"""Return dictionary with plugin's parameters and their hints

		Parameters:
		plugin_name -- name of plugin

		Return:
		dictionary -- {parameter_name: hint}
		"""
		if caller == "":
			return {}
		return self._daemon.get_plugin_hints(str(plugin_name))
07070100000121000081A40000000000000000000000016391BC3A000030A8000000000000000000000000000000000000003300000000tuned-2.19.0.29+git.b894a3e/tuned/daemon/daemon.py
import os
import errno
import threading
import tuned.logs
from tuned.exceptions import TunedException
from tuned.profiles.exceptions import InvalidProfileException
import tuned.consts as consts
from tuned.utils.commands import commands
from tuned import exports
from tuned.utils.profile_recommender import ProfileRecommender
import re

log = tuned.logs.get()

class Daemon(object):
	def __init__(self, unit_manager, profile_loader, profile_names=None, config=None, application=None):
		log.debug("initializing daemon")
		self._daemon = consts.CFG_DEF_DAEMON
		self._sleep_interval = int(consts.CFG_DEF_SLEEP_INTERVAL)
		self._update_interval = int(consts.CFG_DEF_UPDATE_INTERVAL)
		self._dynamic_tuning = consts.CFG_DEF_DYNAMIC_TUNING
		self._recommend_command = True
		if config is not None:
			self._daemon =
config.get_bool(consts.CFG_DAEMON, consts.CFG_DEF_DAEMON) self._sleep_interval = int(config.get(consts.CFG_SLEEP_INTERVAL, consts.CFG_DEF_SLEEP_INTERVAL)) self._update_interval = int(config.get(consts.CFG_UPDATE_INTERVAL, consts.CFG_DEF_UPDATE_INTERVAL)) self._dynamic_tuning = config.get_bool(consts.CFG_DYNAMIC_TUNING, consts.CFG_DEF_DYNAMIC_TUNING) self._recommend_command = config.get_bool(consts.CFG_RECOMMEND_COMMAND, consts.CFG_DEF_RECOMMEND_COMMAND) self._application = application if self._sleep_interval <= 0: self._sleep_interval = int(consts.CFG_DEF_SLEEP_INTERVAL) if self._update_interval == 0: self._dynamic_tuning = False elif self._update_interval < self._sleep_interval: self._update_interval = self._sleep_interval self._sleep_cycles = self._update_interval // self._sleep_interval log.info("using sleep interval of %d second(s)" % self._sleep_interval) if self._dynamic_tuning: log.info("dynamic tuning is enabled (can be overridden by plugins)") log.info("using update interval of %d second(s) (%d times of the sleep interval)" % (self._sleep_cycles * self._sleep_interval, self._sleep_cycles)) self._profile_recommender = ProfileRecommender(is_hardcoded = not self._recommend_command) self._unit_manager = unit_manager self._profile_loader = profile_loader self._init_threads() self._cmd = commands() try: self._init_profile(profile_names) except TunedException as e: log.error("Cannot set initial profile. 
No tunings will be enabled: %s" % e) def _init_threads(self): self._thread = None self._terminate = threading.Event() # Flag which is set if terminating due to profile_switch self._terminate_profile_switch = threading.Event() # Flag which is set if there is no operation in progress self._not_used = threading.Event() self._not_used.set() self._profile_applied = threading.Event() def reload_profile_config(self): """Read configuration files again and load profile according to them""" self._init_profile(None) def _init_profile(self, profile_names): manual = True post_loaded_profile = self._cmd.get_post_loaded_profile() if profile_names is None: (profile_names, manual) = self._get_startup_profile() if profile_names is None: msg = "No profile is preset, running in manual mode. " if post_loaded_profile: msg += "Only post-loaded profile will be enabled" else: msg += "No profile will be enabled." log.info(msg) # Passed through '-p' cmdline option elif profile_names == "": if post_loaded_profile: log.info("Only post-loaded profile will be enabled") else: log.info("No profile will be enabled.") self._profile = None self._manual = None self._active_profiles = [] self._post_loaded_profile = None self.set_all_profiles(profile_names, manual, post_loaded_profile) def _load_profiles(self, profile_names, manual): profile_names = profile_names or "" profile_list = profile_names.split() if self._post_loaded_profile: log.info("Using post-loaded profile '%s'" % self._post_loaded_profile) profile_list = profile_list + [self._post_loaded_profile] for profile in profile_list: if profile not in self.profile_loader.profile_locator.get_known_names(): errstr = "Requested profile '%s' doesn't exist." 
% profile self._notify_profile_changed(profile_names, False, errstr) raise TunedException(errstr) try: if profile_list: self._profile = self._profile_loader.load(profile_list) else: self._profile = None self._manual = manual self._active_profiles = profile_names.split() except InvalidProfileException as e: errstr = "Cannot load profile(s) '%s': %s" % (" ".join(profile_list), e) self._notify_profile_changed(profile_names, False, errstr) raise TunedException(errstr) def set_profile(self, profile_names, manual): if self.is_running(): errstr = "Cannot set profile while the daemon is running." self._notify_profile_changed(profile_names, False, errstr) raise TunedException(errstr) self._load_profiles(profile_names, manual) def _set_post_loaded_profile(self, profile_name): if not profile_name: self._post_loaded_profile = None elif len(profile_name.split()) > 1: errstr = "Whitespace is not allowed in profile names; only a single post-loaded profile is allowed." raise TunedException(errstr) else: self._post_loaded_profile = profile_name def set_all_profiles(self, active_profiles, manual, post_loaded_profile, save_instantly=False): if self.is_running(): errstr = "Cannot set profile while the daemon is running." self._notify_profile_changed(active_profiles, False, errstr) raise TunedException(errstr) self._set_post_loaded_profile(post_loaded_profile) self._load_profiles(active_profiles, manual) if save_instantly: self._save_active_profile(active_profiles, manual) self._save_post_loaded_profile(post_loaded_profile) @property def profile(self): return self._profile @property def manual(self): return self._manual @property def post_loaded_profile(self): # Return the profile name only if the profile is active. If # the profile is not active, then the value is meaningless. 
		return self._post_loaded_profile if self._profile else None

	@property
	def profile_recommender(self):
		return self._profile_recommender

	@property
	def profile_loader(self):
		return self._profile_loader

	# send notification when profile is changed (everything is set up) or if an error occurred
	# result: True - OK, False - an error occurred
	def _notify_profile_changed(self, profile_names, result, errstr):
		if self._application is not None and self._application._dbus_exporter is not None:
			self._application._dbus_exporter.send_signal(consts.DBUS_SIGNAL_PROFILE_CHANGED, profile_names, result, errstr)
		return errstr

	def _full_rollback_required(self):
		retcode, out = self._cmd.execute(["systemctl", "is-system-running"], no_errors = [0])
		if retcode < 0:
			return False
		if out[:8] == "stopping":
			return False
		retcode, out = self._cmd.execute(["systemctl", "list-jobs"], no_errors = [0])
		return re.search(r"\b(shutdown|reboot|halt|poweroff)\.target.*start", out) is None and not retcode

	def _thread_code(self):
		if self._profile is None:
			raise TunedException("Cannot start the daemon without setting a profile.")
		self._unit_manager.create(self._profile.units)
		self._save_active_profile(" ".join(self._active_profiles), self._manual)
		self._save_post_loaded_profile(self._post_loaded_profile)
		self._unit_manager.start_tuning()
		self._profile_applied.set()
		log.info("static tuning from profile '%s' applied" % self._profile.name)
		if self._daemon:
			exports.start()
		profile_names = " ".join(self._active_profiles)
		self._notify_profile_changed(profile_names, True, "OK")
		if self._daemon:
			# In a Python 2 interpreter with the patch for rhbz#917709 applied, we need to
			# poll periodically; otherwise Python will not get a chance to update events /
			# locks (due to the GIL) and e.g. D-Bus control will not work. The polling
			# interval of 1 second (which is the default) is still much better than the
			# 50 ms polling of an unpatched interpreter.
			# For more details see TuneD rhbz#917587.
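The rationale in the comment above drives the main loop that follows: wake up every `sleep_interval`, and run dynamic tuning once every `sleep_cycles` wake-ups. A standalone sketch of this polling pattern (the function, its parameters, and the `update` callback are illustrative, not tuned's API):

```python
# Sketch of the polling-loop pattern used by the daemon thread:
# a short Event.wait() timeout keeps the interpreter responsive,
# while a cycle counter spaces out the (more expensive) updates.
import threading

def polling_loop(terminate, sleep_interval, sleep_cycles, update, max_updates):
	updates = 0
	sleep_cnt = sleep_cycles
	# Event.wait(timeout) returns False on timeout and True once terminate is set.
	while not terminate.wait(sleep_interval):
		sleep_cnt -= 1
		if sleep_cnt <= 0:
			sleep_cnt = sleep_cycles
			update()
			updates += 1
			# max_updates only exists to make the sketch terminate.
			if updates >= max_updates:
				terminate.set()
	return updates
```

The effective update period is `sleep_cycles * sleep_interval`, matching the "update interval (... times of the sleep interval)" log message earlier in this section.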
_sleep_cnt = self._sleep_cycles while not self._cmd.wait(self._terminate, self._sleep_interval): if self._dynamic_tuning: _sleep_cnt -= 1 if _sleep_cnt <= 0: _sleep_cnt = self._sleep_cycles log.debug("updating monitors") self._unit_manager.update_monitors() log.debug("performing tunings") self._unit_manager.update_tuning() self._profile_applied.clear() # wait for others to complete their tasks, use timeout 3 x sleep_interval to prevent # deadlocks i = 0 while not self._cmd.wait(self._not_used, self._sleep_interval) and i < 3: i += 1 # if terminating due to profile switch if self._terminate_profile_switch.is_set(): full_rollback = True else: # with systemd it detects system shutdown and in such case it doesn't perform # full cleanup, if not shutting down it means that TuneD was explicitly # stopped by user and in such case do full cleanup, without systemd never # do full cleanup full_rollback = False if self._full_rollback_required(): if self._daemon: log.info("terminating TuneD, rolling back all changes") full_rollback = True else: log.info("terminating TuneD in one-shot mode") else: log.info("terminating TuneD due to system shutdown / reboot") if self._daemon: self._unit_manager.stop_tuning(full_rollback) self._unit_manager.destroy_all() def _save_active_profile(self, profile_names, manual): try: self._cmd.save_active_profile(profile_names, manual) except TunedException as e: log.error(str(e)) def _save_post_loaded_profile(self, profile_name): try: self._cmd.save_post_loaded_profile(profile_name) except TunedException as e: log.error(str(e)) def _get_recommended_profile(self): log.info("Running in automatic mode, checking what profile is recommended for your configuration.") profile = self._profile_recommender.recommend() log.info("Using '%s' profile" % profile) return profile def _get_startup_profile(self): profile, manual = self._cmd.get_active_profile() if manual is None: manual = profile is not None if not manual: profile = self._get_recommended_profile() 
		return profile, manual

	def get_all_plugins(self):
		"""Return all accessible plugin classes"""
		return self._unit_manager.plugins_repository.load_all_plugins()

	def get_plugin_documentation(self, plugin_name):
		"""Return plugin class docstring"""
		try:
			plugin_class = self._unit_manager.plugins_repository.load_plugin(plugin_name)
		except ImportError:
			return ""
		return plugin_class.__doc__

	def get_plugin_hints(self, plugin_name):
		"""Return plugin's parameters and their hints

		Parameters:
		plugin_name -- plugin name

		Return:
		dictionary -- {parameter_name: hint}
		"""
		try:
			plugin_class = self._unit_manager.plugins_repository.load_plugin(plugin_name)
		except ImportError:
			return {}
		return plugin_class.get_config_options_hints()

	def is_enabled(self):
		return self._profile is not None

	def is_running(self):
		return self._thread is not None and self._thread.is_alive()

	def start(self):
		if self.is_running():
			return False
		if self._profile is None:
			return False
		log.info("starting tuning")
		self._not_used.set()
		self._thread = threading.Thread(target=self._thread_code)
		self._terminate_profile_switch.clear()
		self._terminate.clear()
		self._thread.start()
		return True

	def verify_profile(self, ignore_missing):
		if not self.is_running():
			log.error("TuneD is not running")
			return False
		if self._profile is None:
			log.error("no profile is set")
			return False
		if not self._profile_applied.is_set():
			log.error("profile is not applied")
			return False
		# when running as a daemon, the main loop must not exit before verification completes
		self._not_used.clear()
		log.info("verifying profile(s): %s" % self._profile.name)
		ret = self._unit_manager.verify_tuning(ignore_missing)
		# main loop is allowed to exit
		self._not_used.set()
		return ret

	# profile_switch is a helper telling plugins whether the stop is due to a profile switch
	def stop(self, profile_switch = False):
		if not self.is_running():
			return False
		log.info("stopping tuning")
		if profile_switch:
			self._terminate_profile_switch.set()
		self._terminate.set()
		self._thread.join()
		self._thread =
None return True 07070100000122000081A40000000000000000000000016391BC3A00000238000000000000000000000000000000000000003000000000tuned-2.19.0.29+git.b894a3e/tuned/exceptions.pyimport tuned.logs import sys import traceback exception_logger = tuned.logs.get() class TunedException(Exception): """ """ def log(self, logger = None): if logger is None: logger = exception_logger logger.error(str(self)) self._log_trace(logger) def _log_trace(self, logger): (exc_type, exc_value, exc_traceback) = sys.exc_info() if exc_value != self: logger.debug("stack trace is no longer available") else: exception_info = "".join(traceback.format_exception(exc_type, exc_value, exc_traceback)).rstrip() logger.debug(exception_info) 07070100000123000041ED0000000000000000000000026391BC3A00000000000000000000000000000000000000000000002A00000000tuned-2.19.0.29+git.b894a3e/tuned/exports07070100000124000081A40000000000000000000000016391BC3A000003F6000000000000000000000000000000000000003600000000tuned-2.19.0.29+git.b894a3e/tuned/exports/__init__.pyfrom . import interfaces from . import controller from . 
import dbus_exporter as dbus def export(*args, **kwargs): """Decorator, use to mark exportable methods.""" def wrapper(method): method.export_params = [ args, kwargs ] return method return wrapper def signal(*args, **kwargs): """Decorator, use to mark exportable signals.""" def wrapper(method): method.signal_params = [ args, kwargs ] return method return wrapper def register_exporter(instance): if not isinstance(instance, interfaces.ExporterInterface): raise Exception() ctl = controller.ExportsController.get_instance() return ctl.register_exporter(instance) def register_object(instance): if not isinstance(instance, interfaces.ExportableInterface): raise Exception() ctl = controller.ExportsController.get_instance() return ctl.register_object(instance) def start(): ctl = controller.ExportsController.get_instance() return ctl.start() def stop(): ctl = controller.ExportsController.get_instance() return ctl.stop() 07070100000125000081A40000000000000000000000016391BC3A000007A2000000000000000000000000000000000000003800000000tuned-2.19.0.29+git.b894a3e/tuned/exports/controller.pyfrom . import interfaces import inspect import tuned.patterns class ExportsController(tuned.patterns.Singleton): """ Controls and manages object interface exporting. 
""" def __init__(self): super(ExportsController, self).__init__() self._exporters = [] self._objects = [] self._exports_initialized = False def register_exporter(self, instance): """Register objects exporter.""" self._exporters.append(instance) def register_object(self, instance): """Register object to be exported.""" self._objects.append(instance) def _is_exportable_method(self, method): """Check if method was marked with @exports.export wrapper.""" return inspect.ismethod(method) and hasattr(method, "export_params") def _is_exportable_signal(self, method): """Check if method was marked with @exports.signal wrapper.""" return inspect.ismethod(method) and hasattr(method, "signal_params") def _export_method(self, method): """Register method to all exporters.""" for exporter in self._exporters: args = method.export_params[0] kwargs = method.export_params[1] exporter.export(method, *args, **kwargs) def _export_signal(self, method): """Register signal to all exporters.""" for exporter in self._exporters: args = method.signal_params[0] kwargs = method.signal_params[1] exporter.signal(method, *args, **kwargs) def _initialize_exports(self): if self._exports_initialized: return for instance in self._objects: for name, method in inspect.getmembers(instance, self._is_exportable_method): self._export_method(method) for name, method in inspect.getmembers(instance, self._is_exportable_signal): self._export_signal(method) self._exports_initialized = True def start(self): """Start the exports.""" self._initialize_exports() for exporter in self._exporters: exporter.start() def stop(self): """Stop the exports.""" for exporter in self._exporters: exporter.stop() 07070100000126000081A40000000000000000000000016391BC3A0000181E000000000000000000000000000000000000003B00000000tuned-2.19.0.29+git.b894a3e/tuned/exports/dbus_exporter.pyfrom . 
import interfaces import dbus.service import dbus.mainloop.glib import dbus.exceptions import threading import signal import tuned.logs import tuned.consts as consts from inspect import ismethod from tuned.utils.polkit import polkit from gi.repository import GLib from types import FunctionType try: # Python3 version # getfullargspec is not present in Python2, so when we drop P2 support # replace "getargspec(func)" in code with "getfullargspec(func).args" from inspect import getfullargspec def getargspec(func): return getfullargspec(func) except ImportError: # Python2 version, drop after support stops from inspect import getargspec log = tuned.logs.get() class DBusExporter(interfaces.ExporterInterface): """ Export method calls through DBus Interface. We take a method to be exported and create a simple wrapper function to call it. This is required as we need the original function to be bound to the original object instance. While the wrapper will be bound to an object we dynamically construct. 
""" def __init__(self, bus_name, interface_name, object_name): dbus.mainloop.glib.DBusGMainLoop(set_as_default=True) self._dbus_object_cls = None self._dbus_object = None self._dbus_methods = {} self._signals = set() self._bus_name = bus_name self._interface_name = interface_name self._object_name = object_name self._thread = None self._bus_object = None self._polkit = polkit() # dirty hack that fixes KeyboardInterrupt handling # the hack is needed because PyGObject / GTK+-3 developers are morons signal_handler = signal.getsignal(signal.SIGINT) self._main_loop = GLib.MainLoop() signal.signal(signal.SIGINT, signal_handler) @property def bus_name(self): return self._bus_name @property def interface_name(self): return self._interface_name @property def object_name(self): return self._object_name def running(self): return self._thread is not None def _prepare_for_dbus(self, method, wrapper): source = """def {name}({args}): return wrapper({args}) """.format(name=method.__name__, args=', '.join(getargspec(method.__func__).args)) code = compile(source, '<decorator-gen-%d>' % len(self._dbus_methods), 'exec') # https://docs.python.org/3.9/library/inspect.html # co_consts - tuple of constants used in the bytecode # example: # compile("e=2\ndef f(x):\n return x*2\n", "X", 'exec').co_consts # (2, <code object f at 0x7f8c60c65330, file "X", line 2>, None) # Because we have only one object in code (our function), we can use code.co_consts[0] func = FunctionType(code.co_consts[0], locals(), method.__name__) return func def export(self, method, in_signature, out_signature): if not ismethod(method): raise Exception("Only bound methods can be exported.") method_name = method.__name__ if method_name in self._dbus_methods: raise Exception("Method with this name is already exported.") def wrapper(owner, *args, **kwargs): action_id = consts.NAMESPACE + "." 
+ method.__name__ caller = args[-1] log.debug("checking authorization for for action '%s' requested by caller '%s'" % (action_id, caller)) ret = self._polkit.check_authorization(caller, action_id) args_copy = args if ret == 1: log.debug("action '%s' requested by caller '%s' was successfully authorized by polkit" % (action_id, caller)) elif ret == 2: log.warn("polkit error, but action '%s' requested by caller '%s' was successfully authorized by fallback method" % (action_id, caller)) elif ret == 0: log.info("action '%s' requested by caller '%s' wasn't authorized, ignoring the request" % (action_id, caller)) args_copy = list(args[:-1]) + [""] elif ret == -1: log.warn("polkit error and action '%s' requested by caller '%s' wasn't authorized by fallback method, ignoring the request" % (action_id, caller)) args_copy = list(args[:-1]) + [""] else: log.error("polkit error and unable to use fallback method to authorize action '%s' requested by caller '%s', ignoring the request" % (action_id, caller)) args_copy = list(args[:-1]) + [""] return method(*args_copy, **kwargs) wrapper = self._prepare_for_dbus(method, wrapper) wrapper = dbus.service.method(self._interface_name, in_signature, out_signature, sender_keyword = "caller")(wrapper) self._dbus_methods[method_name] = wrapper def signal(self, method, out_signature): if not ismethod(method): raise Exception("Only bound methods can be exported.") method_name = method.__name__ if method_name in self._dbus_methods: raise Exception("Method with this name is already exported.") def wrapper(owner, *args, **kwargs): return method(*args, **kwargs) wrapper = self._prepare_for_dbus(method, wrapper) wrapper = dbus.service.signal(self._interface_name, out_signature)(wrapper) self._dbus_methods[method_name] = wrapper self._signals.add(method_name) def send_signal(self, signal, *args, **kwargs): err = False if not signal in self._signals or self._bus_object is None: err = True try: method = getattr(self._bus_object, signal) except 
AttributeError: err = True if err: raise Exception("Signal '%s' doesn't exist." % signal) else: method(*args, **kwargs) def _construct_dbus_object_class(self): if self._dbus_object_cls is not None: raise Exception("The exporter class was already build.") unique_name = "DBusExporter_%d" % id(self) cls = type(unique_name, (dbus.service.Object,), self._dbus_methods) self._dbus_object_cls = cls def start(self): if self.running(): return if self._dbus_object_cls is None: self._construct_dbus_object_class() self.stop() bus = dbus.SystemBus() bus_name = dbus.service.BusName(self._bus_name, bus) self._bus_object = self._dbus_object_cls(bus, self._object_name, bus_name) self._thread = threading.Thread(target=self._thread_code) self._thread.start() def stop(self): if self._thread is not None and self._thread.is_alive(): self._main_loop.quit() self._thread.join() self._thread = None def _thread_code(self): self._main_loop.run() del self._bus_object self._bus_object = None 07070100000127000081A40000000000000000000000016391BC3A0000022B000000000000000000000000000000000000003800000000tuned-2.19.0.29+git.b894a3e/tuned/exports/interfaces.pyclass ExportableInterface(object): pass class ExporterInterface(object): def export(self, method, in_signature, out_signature): # to be overridden by concrete implementation raise NotImplementedError() def signal(self, method, out_signature): # to be overridden by concrete implementation raise NotImplementedError() def send_signal(self, signal, *args, **kwargs): # to be overridden by concrete implementation raise NotImplementedError() def start(self): raise NotImplementedError() def stop(self): raise NotImplementedError() 
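The `@exports.export` decorator and the controller's introspection pass boil down to a small, reusable pattern: tag a method by attaching an attribute, then collect tagged methods with `inspect.getmembers`. A minimal sketch of that pattern under invented names (`Service`, `exported_methods` are not part of TuneD):

```python
import inspect

def export(*args, **kwargs):
    # mark a method as exportable by attaching its signature metadata;
    # mirrors the [args, kwargs] marker used by the exports package
    def wrapper(method):
        method.export_params = [args, kwargs]
        return method
    return wrapper

class Service:
    @export(in_signature="s", out_signature="b")
    def switch_profile(self, name):
        return True

    def _internal(self):
        pass

def exported_methods(instance):
    # collect bound methods carrying the marker attribute
    return [name for name, m in inspect.getmembers(instance, inspect.ismethod)
            if hasattr(m, "export_params")]
```

Here `exported_methods(Service())` yields only `switch_profile`; attribute access on a bound method falls through to the underlying function, which is why the marker survives `inspect.getmembers`.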
File: tuned-2.19.0.29+git.b894a3e/tuned/gtk/__init__.py (empty)

File: tuned-2.19.0.29+git.b894a3e/tuned/gtk/gui_plugin_loader.py

# -*- coding: utf-8 -*-
# Copyright (C) 2008-2014 Red Hat, Inc.
# Authors: Marek Staňa, Jaroslav Škarvada <jskarvad@redhat.com>
#
# This program is free software; you can redistribute it and/or
# modify it under the terms of the GNU General Public License
# as published by the Free Software Foundation; either version 2
# of the License, or (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program; if not, write to the Free Software
# Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
#

'''
Created on Mar 30, 2014

@author: mstana
'''

import importlib

import tuned.consts as consts
import tuned.logs
from tuned.utils.config_parser import ConfigParser, Error
from tuned.exceptions import TunedException
from tuned.utils.global_config import GlobalConfig
from tuned.admin.dbus_controller import DBusController

__all__ = ['GuiPluginLoader']

class GuiPluginLoader():
	'''
	Class to scan for, import and load the currently available plugins.
	'''

	def __init__(self):
		'''
		Constructor
		'''
		self._plugins = {}
		self.plugins_doc = {}
		self._prefix = 'plugin_'
		self._sufix = '.py'
		self._dbus_controller = DBusController(consts.DBUS_BUS, consts.DBUS_INTERFACE, consts.DBUS_OBJECT)
		self._get_plugins()

	@property
	def plugins(self):
		return self._plugins

	def _get_plugins(self):
		self._plugins = self._dbus_controller.get_plugins()

	def get_plugin_doc(self, plugin_name):
		return self._dbus_controller.get_plugin_documentation(plugin_name)

	def get_plugin_hints(self, plugin_name):
		return self._dbus_controller.get_plugin_hints(plugin_name)

	def _load_global_config(self, file_name=consts.GLOBAL_CONFIG_FILE):
		"""
		Loads the global configuration file.
		"""
		try:
			config_parser = ConfigParser(delimiters=('='), inline_comment_prefixes=('#'))
			config_parser.optionxform = str
			with open(file_name) as f:
				config_parser.read_string("[" + consts.MAGIC_HEADER_NAME + "]\n" + f.read(), file_name)
			config, functions = GlobalConfig.get_global_config_spec()
			for option in config_parser.options(consts.MAGIC_HEADER_NAME):
				if option in config:
					try:
						func = getattr(config_parser, functions[option])
						config[option] = func(consts.MAGIC_HEADER_NAME, option)
					except Error:
						raise TunedException("Global TuneD configuration file '%s' is not valid." % file_name)
				else:
					config[option] = config_parser.get(consts.MAGIC_HEADER_NAME, option, raw=True)
		except IOError as e:
			raise TunedException("Global TuneD configuration file '%s' not found." % file_name)
		except Error as e:
			raise TunedException("Error parsing global TuneD configuration file '%s'." % file_name)
		return config

File: tuned-2.19.0.29+git.b894a3e/tuned/gtk/gui_profile_deleter.py

import os
import sys
import shutil

if __name__ == '__main__':
	shutil.rmtree('/etc/tuned/%s' % (os.path.basename(os.path.abspath(sys.argv[1]))))

File: tuned-2.19.0.29+git.b894a3e/tuned/gtk/gui_profile_loader.py

# -*- coding: utf-8 -*-
# Copyright (C) 2008-2014 Red Hat, Inc.
# Authors: Marek Staňa, Jaroslav Škarvada <jskarvad@redhat.com>
#
# This program is free software; you can redistribute it and/or
# modify it under the terms of the GNU General Public License
# as published by the Free Software Foundation; either version 2
# of the License, or (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program; if not, write to the Free Software
# Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
#

'''
Created on Mar 13, 2014

@author: mstana
'''

import os
from tuned.utils.config_parser import ConfigParser, Error
import subprocess
import json
import sys
import collections

import tuned.profiles.profile as p
import tuned.consts
import shutil

import tuned.gtk.managerException as managerException
import tuned.gtk.gui_profile_saver
import tuned.gtk.gui_profile_deleter

class GuiProfileLoader(object):
	"""
	Profiles loader for GUI Gtk purposes.
	"""

	profiles = {}

	def __init__(self, directories):
		self.directories = directories
		self._load_all_profiles()

	def get_raw_profile(self, profile_name):
		file = self._locate_profile_path(profile_name) + '/' + profile_name + '/' + tuned.consts.PROFILE_FILE
		with open(file, 'r') as f:
			return f.read()

	def set_raw_profile(self, profile_name, config):
		profilePath = self._locate_profile_path(profile_name)
		if profilePath == tuned.consts.LOAD_DIRECTORIES[1]:
			file_path = profilePath + '/' + profile_name + '/' + tuned.consts.PROFILE_FILE
			config_parser = ConfigParser(delimiters=('='), inline_comment_prefixes=('#'))
			config_parser.optionxform = str
			config_parser.read_string(config)
			config_obj = {
				'main': collections.OrderedDict(),
				'filename': file_path,
				'initial_comment': ('#', '# tuned configuration', '#')
			}
			for s in config_parser.sections():
				config_obj['main'][s] = collections.OrderedDict()
				for o in config_parser.options(s):
					config_obj['main'][s][o] = config_parser.get(s, o, raw=True)
			self._save_profile(config_obj)
			self._refresh_profiles()
		else:
			raise managerException.ManagerException(profile_name + ' profile is stored in ' + profilePath + ' and cannot be stored to this location')

	def load_profile_config(self, profile_name, path):
		conf_path = path + '/' + profile_name + '/' + tuned.consts.PROFILE_FILE
		config = ConfigParser(delimiters=('='), inline_comment_prefixes=('#'))
		config.optionxform = str
		profile_config = collections.OrderedDict()
		with open(conf_path) as f:
			config.read_file(f, conf_path)
		for s in config.sections():
			profile_config[s] = collections.OrderedDict()
			for o in config.options(s):
				profile_config[s][o] = config.get(s, o, raw=True)
		return profile_config

	def _locate_profile_path(self, profile_name):
		for d in self.directories:
			for profile in os.listdir(d):
				if os.path.isdir(d + '/' + profile) and profile == profile_name:
					path = d
		return path

	def _load_all_profiles(self):
		for d in self.directories:
			for profile in os.listdir(d):
				if self._is_dir_profile(os.path.join(d,
						profile)):
					try:
						self.profiles[profile] = p.Profile(profile, self.load_profile_config(profile, d))
					except Error:
						pass
#						print "can not make \"" + profile + "\" profile without correct config on path: " + d
#			except:
#				raise managerException.ManagerException("Can not make profile")
#				print "can not make \"" + profile + "\" profile without correct config with path: " + d

	def _is_dir_profile(self, path):
		return os.path.isdir(path) and os.path.isfile(os.path.join(path, 'tuned.conf'))

	def _refresh_profiles(self):
		self.profiles = {}
		self._load_all_profiles()

	def save_profile(self, profile):
		path = tuned.consts.LOAD_DIRECTORIES[1] + '/' + profile.name
		config = {
			'main': collections.OrderedDict(),
			'filename': path + '/' + tuned.consts.PROFILE_FILE,
			'initial_comment': ('#', '# tuned configuration', '#')
		}
		try:
			config['main']['main'] = profile.options
		except KeyError:
			# the profile doesn't have a main section
			config['main']['main'] = {}
		for (name, unit) in list(profile.units.items()):
			config['main'][name] = unit.options
		self._save_profile(config)
		self._refresh_profiles()

	def update_profile(self, old_profile_name, profile, is_admin):
		if old_profile_name not in self.get_names():
			raise managerException.ManagerException('Profile: ' + old_profile_name + ' is not in profiles')

		path = tuned.consts.LOAD_DIRECTORIES[1] + '/' + profile.name
		if old_profile_name != profile.name:
			self.remove_profile(old_profile_name, is_admin=is_admin)
		config = {
			'main': collections.OrderedDict(),
			'filename': path + '/' + tuned.consts.PROFILE_FILE,
			'initial_comment': ('#', '# tuned configuration', '#')
		}
		try:
			config['main']['main'] = profile.options
		except KeyError:
			# the profile doesn't have a main section
			pass
		for (name, unit) in list(profile.units.items()):
			config['main'][name] = unit.options
		self._save_profile(config)
		self._refresh_profiles()

	def get_names(self):
		self._refresh_profiles()
		return list(self.profiles.keys())

	def get_profile(self, profile):
		self._refresh_profiles()
		return self.profiles.get(profile, None)

	def add_profile(self, profile):
		self.profiles[profile.name] = profile
		self.save_profile(profile)

	def remove_profile(self, profile_name, is_admin):
		profile_path = self._locate_profile_path(profile_name)
		if self.is_profile_removable(profile_name):
			self._delete_profile(profile_name)
			self._load_all_profiles()
		else:
			raise managerException.ManagerException(profile_name + ' profile is stored in ' + profile_path)

	def is_profile_removable(self, profile_name):
		# the profile is removable if it is stored in /etc/tuned
		profile_path = self._locate_profile_path(profile_name)
		if profile_path == tuned.consts.LOAD_DIRECTORIES[1]:
			return True
		else:
			return False

	def is_profile_factory(self, profile_name):
		# the profile is a factory profile if it is stored in /usr/lib/tuned
		return not self.is_profile_removable(profile_name)

	def _save_profile(self, config):
		ec = subprocess.call(['pkexec', sys.executable, tuned.gtk.gui_profile_saver.__file__, json.dumps(config)])
		if ec != 0:
			raise managerException.ManagerException('Error while saving profile file "%s"' % (config['filename']))

	def _delete_profile(self, profile_name):
		ec = subprocess.call(['pkexec', sys.executable, tuned.gtk.gui_profile_deleter.__file__, profile_name])
		if ec != 0:
			raise managerException.ManagerException('Error while deleting profile "%s"' % (profile_name))

File: tuned-2.19.0.29+git.b894a3e/tuned/gtk/gui_profile_saver.py

import os
import sys
import json

from tuned.utils.config_parser import ConfigParser

if __name__ == "__main__":
	profile_dict = json.loads(str(sys.argv[1]))
	if not os.path.exists(profile_dict['filename']):
		os.makedirs(os.path.dirname(profile_dict['filename']))
	profile_configobj = ConfigParser(delimiters=('='), inline_comment_prefixes=('#'))
	profile_configobj.optionxform = str
	for section, options in profile_dict['main'].items():
		profile_configobj.add_section(section)
		for option, value in options.items():
			profile_configobj.set(section, option, value)
	path = os.path.join('/etc', 'tuned', os.path.dirname(os.path.abspath(profile_dict['filename'])), 'tuned.conf')
	with open(path, 'w') as f:
		profile_configobj.write(f)
	with open(path, 'r+') as f:
		content = f.read()
		f.seek(0, 0)
		f.write("\n".join(profile_dict['initial_comment']) + "\n" + content)
	sys.exit(0)

File: tuned-2.19.0.29+git.b894a3e/tuned/gtk/managerException.py

# -*- coding: utf-8 -*-
# Copyright (C) 2008-2014 Red Hat, Inc.
# Authors: Marek Staňa, Jaroslav Škarvada <jskarvad@redhat.com>
#
# This program is free software; you can redistribute it and/or
# modify it under the terms of the GNU General Public License
# as published by the Free Software Foundation; either version 2
# of the License, or (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program; if not, write to the Free Software
# Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
#

'''
Created on Apr 6, 2014

@author: mstana
'''

class ManagerException(Exception):
	"""
	"""

	def __init__(self, code):
		self.code = code

	def __str__(self):
		return repr(self.code)

	def profile_already_exists(self, text=None):
		if text is None:
			return repr(self.code)
		else:
			return repr(text)

File: tuned-2.19.0.29+git.b894a3e/tuned/gtk/tuned_dialog.py

from gi.repository import Gtk

GLADEUI = '/usr/share/tuned/ui/tuned-gui.glade'

class TunedDialog():
	def __init__(self, msg, yes_button_text, no_button_text):
		self._builder = Gtk.Builder()
		self._builder.add_from_file(GLADEUI)
		self._builder.get_object("labelQuestionYesNoDialog").set_text(msg)
		self._builder.get_object("buttonPositiveYesNoDialog").set_label(yes_button_text)
		self._builder.get_object("buttonNegativeYesNoDialog").set_label(no_button_text)

	def run(self):
		val = self._builder.get_object("dialogYesNo").run()
		self._builder.get_object("dialogYesNo").hide()
		return val

File: tuned-2.19.0.29+git.b894a3e/tuned/hardware/__init__.py

from .inventory import *
from .device_matcher import *
from .device_matcher_udev import *

File: tuned-2.19.0.29+git.b894a3e/tuned/hardware/device_matcher.py

import fnmatch
import re

__all__ = ["DeviceMatcher"]

class DeviceMatcher(object):
	"""
	Device name matching against the devices specification in tuning profiles.

	The devices specification consists of multiple rules separated by spaces.
	The rules have a syntax of shell-style wildcards and are either positive
	or negative. The negative rules are prefixed with an exclamation mark.
	"""

	def match(self, rules, device_name):
		"""
		Match a device against the specification in the profile.

		If there is no positive rule in the specification, an implicit rule
		matching all devices is added. The device matches if and only if it
		matches some positive rule and no negative rule.
		"""
		if isinstance(rules, str):
			rules = re.split(r"\s|,\s*", rules)

		positive_rules = [rule for rule in rules if not rule.startswith("!") and not rule.strip() == '']
		negative_rules = [rule[1:] for rule in rules if rule not in positive_rules]

		if len(positive_rules) == 0:
			positive_rules.append("*")

		matches = False
		for rule in positive_rules:
			if fnmatch.fnmatch(device_name, rule):
				matches = True
				break

		for rule in negative_rules:
			if fnmatch.fnmatch(device_name, rule):
				matches = False
				break

		return matches

	def match_list(self, rules, device_list):
		"""
		Match a device list against the specification in the profile.

		Returns the subset of the list containing the devices which match.
		"""
		matching_devices = []
		for device in device_list:
			if self.match(rules, device):
				matching_devices.append(device)
		return matching_devices

File: tuned-2.19.0.29+git.b894a3e/tuned/hardware/device_matcher_udev.py

from . import device_matcher
import re

__all__ = ["DeviceMatcherUdev"]

class DeviceMatcherUdev(device_matcher.DeviceMatcher):
	def match(self, regex, device):
		"""
		Match a device against the udev regex in tuning profiles.

		device is a pyudev.Device object
		"""
		properties = ''
		try:
			items = device.properties.items()
		except AttributeError:
			items = device.items()
		for key, val in sorted(list(items)):
			properties += key + '=' + val + '\n'
		return re.search(regex, properties, re.MULTILINE) is not None

File: tuned-2.19.0.29+git.b894a3e/tuned/hardware/inventory.py

import pyudev
import tuned.logs
from tuned import consts

__all__ = ["Inventory"]

log = tuned.logs.get()

class Inventory(object):
	"""
	The Inventory object handles information about the available hardware devices.
	It also informs the plugins about related hardware events.
	"""

	def __init__(self, udev_context=None, udev_monitor_cls=None, monitor_observer_factory=None,
			buffer_size=None, set_receive_buffer_size=True):
		if udev_context is not None:
			self._udev_context = udev_context
		else:
			self._udev_context = pyudev.Context()
		if udev_monitor_cls is None:
			udev_monitor_cls = pyudev.Monitor
		self._udev_monitor = udev_monitor_cls.from_netlink(self._udev_context)
		if buffer_size is None:
			buffer_size = consts.CFG_DEF_UDEV_BUFFER_SIZE
		if set_receive_buffer_size:
			try:
				self._udev_monitor.set_receive_buffer_size(buffer_size)
			except EnvironmentError:
				log.warn("cannot set the udev monitor receive buffer size, we are probably running inside a "
					+ "container or with limited capabilities, TuneD functionality may be limited")
		if monitor_observer_factory is None:
			monitor_observer_factory = _MonitorObserverFactory()
		self._monitor_observer_factory = monitor_observer_factory
		self._monitor_observer = None

		self._subscriptions = {}

	def get_device(self, subsystem, sys_name):
		"""Get a pyudev.Device object for the sys_name (e.g. 'sda')."""
		try:
			return pyudev.Devices.from_name(self._udev_context, subsystem, sys_name)
		# workaround for pyudev < 0.18
		except AttributeError:
			return pyudev.Device.from_name(self._udev_context, subsystem, sys_name)

	def get_devices(self, subsystem):
		"""Get a list of devices on a given subsystem."""
		return self._udev_context.list_devices(subsystem=subsystem)

	def _handle_udev_event(self, event, device):
		if not device.subsystem in self._subscriptions:
			return
		for (plugin, callback) in self._subscriptions[device.subsystem]:
			try:
				callback(event, device)
			except Exception as e:
				log.error("Exception occurred in the event handler of '%s'." % plugin)
				log.exception(e)

	def subscribe(self, plugin, subsystem, callback):
		"""Register a handler of device events on a given subsystem."""
		log.debug("adding handler: %s (%s)" % (subsystem, plugin))
		callback_data = (plugin, callback)
		if subsystem in self._subscriptions:
			self._subscriptions[subsystem].append(callback_data)
		else:
			self._subscriptions[subsystem] = [callback_data, ]

		self._udev_monitor.filter_by(subsystem)
		# After start(), HW events begin to get queued up
		self._udev_monitor.start()

	def start_processing_events(self):
		if self._monitor_observer is None:
			log.debug("starting monitor observer")
			self._monitor_observer = self._monitor_observer_factory.create(self._udev_monitor, self._handle_udev_event)
			self._monitor_observer.start()

	def stop_processing_events(self):
		if self._monitor_observer is not None:
			log.debug("stopping monitor observer")
			self._monitor_observer.stop()
			self._monitor_observer = None

	def _unsubscribe_subsystem(self, plugin, subsystem):
		for callback_data in self._subscriptions[subsystem]:
			(_plugin, callback) = callback_data
			if plugin == _plugin:
				log.debug("removing handler: %s (%s)" % (subsystem, plugin))
				self._subscriptions[subsystem].remove(callback_data)

	def unsubscribe(self, plugin, subsystem=None):
		"""Unregister a handler registered with the subscribe method."""
		empty_subsystems = []
		for _subsystem in self._subscriptions:
			if subsystem is None or _subsystem == subsystem:
				self._unsubscribe_subsystem(plugin, _subsystem)
				if len(self._subscriptions[_subsystem]) == 0:
					empty_subsystems.append(_subsystem)
		for _subsystem in empty_subsystems:
			del self._subscriptions[_subsystem]

class _MonitorObserverFactory(object):
	def create(self, *args, **kwargs):
		return pyudev.MonitorObserver(*args, **kwargs)

File: tuned-2.19.0.29+git.b894a3e/tuned/logs.py

import atexit
import logging
import logging.handlers
import os
import os.path
import inspect
import tuned.consts as consts
import random
import string
import threading
try:
	from StringIO import StringIO
except:
	from io import StringIO

__all__ = ["get"]

root_logger = None
log_handlers = {}
log_handlers_lock = threading.Lock()

class LogHandler(object):
	def __init__(self, handler, stream):
		self.handler = handler
		self.stream = stream

def _random_string(length):
	r = random.SystemRandom()
	chars = string.ascii_letters + string.digits
	res = ""
	for i in range(length):
		res += r.choice(chars)
	return res

def log_capture_start(log_level):
	with log_handlers_lock:
		for i in range(10):
			token = _random_string(16)
			if token not in log_handlers:
				break
		else:
			return None
		stream = StringIO()
		handler = logging.StreamHandler(stream)
		handler.setLevel(log_level)
		formatter = logging.Formatter("%(levelname)-8s %(name)s: %(message)s")
		handler.setFormatter(formatter)
		root_logger.addHandler(handler)
		log_handler = LogHandler(handler, stream)
		log_handlers[token] = log_handler
		root_logger.debug("Added log handler %s." % token)
		return token

def log_capture_finish(token):
	with log_handlers_lock:
		try:
			log_handler = log_handlers[token]
		except KeyError:
			return None
		content = log_handler.stream.getvalue()
		log_handler.stream.close()
		root_logger.removeHandler(log_handler.handler)
		del log_handlers[token]
		root_logger.debug("Removed log handler %s."
% token) return content def get(): global root_logger if root_logger is None: root_logger = logging.getLogger("tuned") calling_module = inspect.currentframe().f_back name = calling_module.f_locals["__name__"] if name == "__main__": name = "tuned" return root_logger elif name.startswith("tuned."): (root, child) = name.split(".", 1) child_logger = root_logger.getChild(child) child_logger.remove_all_handlers() child_logger.setLevel("NOTSET") return child_logger else: assert False class TunedLogger(logging.getLoggerClass()): """Custom TuneD daemon logger class.""" _formatter = logging.Formatter("%(asctime)s %(levelname)-8s %(name)s: %(message)s") _console_handler = None _file_handler = None def __init__(self, *args, **kwargs): super(TunedLogger, self).__init__(*args, **kwargs) self.setLevel(logging.INFO) self.switch_to_console() def console(self, msg, *args, **kwargs): self.log(consts.LOG_LEVEL_CONSOLE, msg, *args, **kwargs) def switch_to_console(self): self._setup_console_handler() self.remove_all_handlers() self.addHandler(self._console_handler) def switch_to_file(self, filename = consts.LOG_FILE, maxBytes = consts.LOG_FILE_MAXBYTES, backupCount = consts.LOG_FILE_COUNT): self._setup_file_handler(filename, maxBytes, backupCount) self.remove_all_handlers() self.addHandler(self._file_handler) def remove_all_handlers(self): _handlers = self.handlers for handler in _handlers: self.removeHandler(handler) @classmethod def _setup_console_handler(cls): if cls._console_handler is not None: return cls._console_handler = logging.StreamHandler() cls._console_handler.setFormatter(cls._formatter) @classmethod def _setup_file_handler(cls, filename, maxBytes, backupCount): if cls._file_handler is not None: return log_directory = os.path.dirname(filename) if log_directory == '': log_directory = '.' 
		if not os.path.exists(log_directory):
			os.makedirs(log_directory)
		cls._file_handler = logging.handlers.RotatingFileHandler(
				filename, maxBytes = int(maxBytes), backupCount = int(backupCount))
		cls._file_handler.setFormatter(cls._formatter)

logging.addLevelName(consts.LOG_LEVEL_CONSOLE, consts.LOG_LEVEL_CONSOLE_NAME)
logging.setLoggerClass(TunedLogger)
atexit.register(logging.shutdown)

07070100000136000041ED0000000000000000000000026391BC3A00000000000000000000000000000000000000000000002B00000000tuned-2.19.0.29+git.b894a3e/tuned/monitors
07070100000137000081A40000000000000000000000016391BC3A0000002E000000000000000000000000000000000000003700000000tuned-2.19.0.29+git.b894a3e/tuned/monitors/__init__.py
from .base import *
from .repository import *

07070100000138000081A40000000000000000000000016391BC3A00000B6A000000000000000000000000000000000000003300000000tuned-2.19.0.29+git.b894a3e/tuned/monitors/base.py
import tuned.logs

log = tuned.logs.get()

__all__ = ["Monitor"]

class Monitor(object):
	"""
	Base class for all monitors. Monitors provide data about the running
	system to Plugin objects, which use the data to tune system parameters.

	The following methods require reimplementation:
	- _init_available_devices(cls)
	- update(cls)
	"""

	# class properties

	@classmethod
	def _init_class(cls):
		cls._class_initialized = False
		cls._instances = set()
		cls._available_devices = set()
		cls._updating_devices = set()
		cls._load = {}

		cls._init_available_devices()
		assert isinstance(cls._available_devices, set)
		cls._class_initialized = True
		log.debug("available devices: %s" % ", ".join(cls._available_devices))

	@classmethod
	def _init_available_devices(cls):
		raise NotImplementedError()

	@classmethod
	def _update_available_devices(cls):
		cls._init_available_devices()
		log.debug("available devices updated to: %s" % ", ".join(cls._available_devices))

	@classmethod
	def get_available_devices(cls):
		return cls._available_devices

	@classmethod
	def update(cls):
		raise NotImplementedError()

	@classmethod
	def _register_instance(cls, instance):
		cls._instances.add(instance)

	@classmethod
	def _deregister_instance(cls, instance):
		cls._instances.remove(instance)

	@classmethod
	def _refresh_updating_devices(cls):
		new_updating = set()
		for instance in cls._instances:
			new_updating |= instance.devices
		cls._updating_devices.clear()
		cls._updating_devices.update(new_updating)

	@classmethod
	def instances(cls):
		return cls._instances

	# instance properties

	def __init__(self, devices = None):
		if not hasattr(self, "_class_initialized"):
			self._init_class()
		assert hasattr(self, "_class_initialized")

		self._register_instance(self)
		if devices is not None:
			self.devices = devices
		else:
			self.devices = self.get_available_devices()
		self.update()

	def __del__(self):
		try:
			self.cleanup()
		except:
			pass

	def cleanup(self):
		self._deregister_instance(self)
		self._refresh_updating_devices()

	@property
	def devices(self):
		return self._devices

	@devices.setter
	def devices(self, value):
		new_devices = self._available_devices & set(value)
		self._devices = new_devices
		self._refresh_updating_devices()

	def add_device(self, device):
		assert (isinstance(device, str) or isinstance(device, unicode))
		self._update_available_devices()
		if device in self._available_devices:
			self._devices.add(device)
			self._updating_devices.add(device)

	def remove_device(self, device):
		assert (isinstance(device, str) or isinstance(device, unicode))
		if device in self._devices:
			self._devices.remove(device)
			self._updating_devices.remove(device)

	def get_load(self):
		return dict([dev_load for dev_load in list(self._load.items()) if dev_load[0] in self._devices])

	def get_device_load(self, device):
		return self._load.get(device, None)

07070100000139000081A40000000000000000000000016391BC3A00000376000000000000000000000000000000000000003B00000000tuned-2.19.0.29+git.b894a3e/tuned/monitors/monitor_disk.py
import tuned.monitors
import os

class DiskMonitor(tuned.monitors.Monitor):

	_supported_vendors = ["ATA", "SCSI"]

	@classmethod
	def _init_available_devices(cls):
		block_devices = os.listdir("/sys/block")
		available = set(filter(cls._is_device_supported, block_devices))
		cls._available_devices = available

		for d in available:
			cls._load[d] = [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]

	@classmethod
	def _is_device_supported(cls, device):
		vendor_file = "/sys/block/%s/device/vendor" % device
		try:
			vendor = open(vendor_file).read().strip()
		except IOError:
			return False
		return vendor in cls._supported_vendors

	@classmethod
	def update(cls):
		for device in cls._updating_devices:
			cls._update_disk(device)

	@classmethod
	def _update_disk(cls, dev):
		with open("/sys/block/" + dev + "/stat") as statfile:
			cls._load[dev] = list(map(int, statfile.read().split()))

0707010000013A000081A40000000000000000000000016391BC3A00000132000000000000000000000000000000000000003B00000000tuned-2.19.0.29+git.b894a3e/tuned/monitors/monitor_load.py
import tuned.monitors

class LoadMonitor(tuned.monitors.Monitor):
	@classmethod
	def _init_available_devices(cls):
		cls._available_devices = set(["system"])

	@classmethod
	def update(cls):
		with open("/proc/loadavg") as statfile:
			data = statfile.read().split()
			cls._load["system"] = float(data[0])
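The monitors above all follow the same contract: a subclass implements the `_init_available_devices(cls)` and `update(cls)` class methods, and the base class exposes the collected readings through `get_device_load()`. The following is a minimal, self-contained sketch of that contract; `MiniMonitor`, `FakeLoadMonitor`, and the in-memory `_fake_loadavg` string are hypothetical stand-ins (the real `LoadMonitor` reads `/proc/loadavg` and inherits from `tuned.monitors.Monitor`):

```python
class MiniMonitor(object):
	"""Stripped-down analogue of the tuned Monitor base class (assumption)."""

	@classmethod
	def _init_class(cls):
		cls._available_devices = set()
		cls._load = {}
		cls._init_available_devices()

	@classmethod
	def _init_available_devices(cls):
		# subclasses must populate cls._available_devices
		raise NotImplementedError()

	@classmethod
	def update(cls):
		# subclasses must refresh cls._load
		raise NotImplementedError()

	@classmethod
	def get_device_load(cls, device):
		return cls._load.get(device, None)

class FakeLoadMonitor(MiniMonitor):
	# hypothetical in-memory substitute for /proc/loadavg
	_fake_loadavg = "0.42 0.36 0.30 1/512 12345"

	@classmethod
	def _init_available_devices(cls):
		cls._available_devices = set(["system"])

	@classmethod
	def update(cls):
		data = cls._fake_loadavg.split()
		cls._load["system"] = float(data[0])

FakeLoadMonitor._init_class()
FakeLoadMonitor.update()
print(FakeLoadMonitor.get_device_load("system"))  # -> 0.42
```

The real base class additionally tracks instances and the set of devices being updated; this sketch only shows the two methods a new monitor has to supply.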
0707010000013B000081A40000000000000000000000016391BC3A00000493000000000000000000000000000000000000003A00000000tuned-2.19.0.29+git.b894a3e/tuned/monitors/monitor_net.py
import tuned.monitors
import os
import re

from tuned.utils.nettool import ethcard

class NetMonitor(tuned.monitors.Monitor):

	@classmethod
	def _init_available_devices(cls):
		available = []
		for root, dirs, files in os.walk("/sys/devices"):
			if root.endswith("/net") and not root.endswith("/virtual/net"):
				available += dirs

		cls._available_devices = set(available)

		for dev in available:
			#max_speed = cls._calcspeed(ethcard(dev).get_max_speed())
			cls._load[dev] = ['0', '0', '0', '0']

	@classmethod
	def _calcspeed(cls, speed):
		# 0.6 is just a magical constant (empirical value): Typical workload on netcard won't exceed
		# that and if it does, then the code is smart enough to adapt it.
		# 1024 * 1024 as for MB -> B
		# speed / 8  Mb -> MB
		return (int) (0.6 * 1024 * 1024 * speed / 8)

	@classmethod
	def _updateStat(cls, dev):
		files = ["rx_bytes", "rx_packets", "tx_bytes", "tx_packets"]
		for i, f in enumerate(files):
			with open("/sys/class/net/" + dev + "/statistics/" + f) as statfile:
				cls._load[dev][i] = statfile.read().strip()

	@classmethod
	def update(cls):
		for device in cls._updating_devices:
			cls._updateStat(device)

0707010000013C000081A40000000000000000000000016391BC3A0000033F000000000000000000000000000000000000003900000000tuned-2.19.0.29+git.b894a3e/tuned/monitors/repository.py
import tuned.logs
import tuned.monitors
from tuned.utils.plugin_loader import PluginLoader

log = tuned.logs.get()

__all__ = ["Repository"]

class Repository(PluginLoader):

	def __init__(self):
		super(Repository, self).__init__()
		self._monitors = set()

	@property
	def monitors(self):
		return self._monitors

	def _set_loader_parameters(self):
		self._namespace = "tuned.monitors"
		self._prefix = "monitor_"
		self._interface = tuned.monitors.Monitor

	def create(self, plugin_name, devices):
		log.debug("creating monitor %s" % plugin_name)
		monitor_cls = self.load_plugin(plugin_name)
		monitor_instance = monitor_cls(devices)
		self._monitors.add(monitor_instance)
		return monitor_instance

	def delete(self, monitor):
		assert isinstance(monitor, self._interface)
		monitor.cleanup()
		self._monitors.remove(monitor)

0707010000013D000081A40000000000000000000000016391BC3A0000014F000000000000000000000000000000000000002E00000000tuned-2.19.0.29+git.b894a3e/tuned/patterns.py
class Singleton(object):
	"""
	Singleton design pattern.
	"""
	_instance = None

	def __init__(self):
		if self.__class__ is Singleton:
			raise TypeError("Cannot instantiate directly.")

	@classmethod
	def get_instance(cls):
		"""Get the class instance."""
		if cls._instance is None:
			cls._instance = cls()
		return cls._instance

0707010000013E000041ED0000000000000000000000036391BC3A00000000000000000000000000000000000000000000002A00000000tuned-2.19.0.29+git.b894a3e/tuned/plugins
0707010000013F000081A40000000000000000000000016391BC3A00000031000000000000000000000000000000000000003600000000tuned-2.19.0.29+git.b894a3e/tuned/plugins/__init__.py
from .repository import *
from . import instance

07070100000140000081A40000000000000000000000016391BC3A000056D0000000000000000000000000000000000000003200000000tuned-2.19.0.29+git.b894a3e/tuned/plugins/base.py
import re
import tuned.consts as consts
import tuned.profiles.variables
import tuned.logs
import collections
from tuned.utils.commands import commands
import os
from subprocess import Popen, PIPE

log = tuned.logs.get()

class Plugin(object):
	"""
	Base class for all plugins.

	Plugins change various system settings in order to get desired
	performance or power saving. Plugins use Monitor objects to get
	information from the running system.

	Intentionally a lot of logic is included in the plugin to increase
	plugin flexibility.
	"""

	def __init__(self, monitors_repository, storage_factory, hardware_inventory, device_matcher, device_matcher_udev, instance_factory, global_cfg, variables):
		"""Plugin constructor."""

		self._storage = storage_factory.create(self.__class__.__name__)
		self._monitors_repository = monitors_repository
		self._hardware_inventory = hardware_inventory
		self._device_matcher = device_matcher
		self._device_matcher_udev = device_matcher_udev
		self._instance_factory = instance_factory
		self._instances = collections.OrderedDict()
		self._init_commands()

		self._global_cfg = global_cfg
		self._variables = variables
		self._has_dynamic_options = False
		self._devices_inited = False

		self._options_used_by_dynamic = self._get_config_options_used_by_dynamic()

		self._cmd = commands()

	def cleanup(self):
		self.destroy_instances()

	def init_devices(self):
		if not self._devices_inited:
			self._init_devices()
			self._devices_inited = True

	@property
	def name(self):
		return self.__class__.__module__.split(".")[-1].split("_", 1)[1]

	#
	# Plugin configuration manipulation and helpers.
	#

	@classmethod
	def _get_config_options(self):
		"""Default configuration options for the plugin."""
		return {}

	@classmethod
	def get_config_options_hints(cls):
		"""Explanation of each config option function"""
		return {}

	@classmethod
	def _get_config_options_used_by_dynamic(self):
		"""List of config options used by dynamic tuning. Their previous
		values will be automatically saved and restored."""
		return []

	def _get_effective_options(self, options):
		"""Merge provided options with plugin default options."""
		# TODO: _has_dynamic_options is a hack
		effective = self._get_config_options().copy()
		for key in options:
			if key in effective or self._has_dynamic_options:
				effective[key] = options[key]
			else:
				log.warn("Unknown option '%s' for plugin '%s'." % (key, self.__class__.__name__))
		return effective

	def _option_bool(self, value):
		if type(value) is bool:
			return value
		value = str(value).lower()
		return value == "true" or value == "1"

	#
	# Interface for manipulation with instances of the plugin.
	#

	def create_instance(self, name, devices_expression, devices_udev_regex, script_pre, script_post, options):
		"""Create new instance of the plugin and seize the devices."""
		if name in self._instances:
			raise Exception("Plugin instance with name '%s' already exists." % name)
		effective_options = self._get_effective_options(options)
		instance = self._instance_factory.create(self, name, devices_expression, devices_udev_regex, \
			script_pre, script_post, effective_options)
		self._instances[name] = instance
		return instance

	def destroy_instance(self, instance):
		"""Destroy existing instance."""
		if instance._plugin != self:
			raise Exception("Plugin instance '%s' does not belong to this plugin '%s'." % (instance, self))
		if instance.name not in self._instances:
			raise Exception("Plugin instance '%s' was already destroyed." % instance)
		instance = self._instances[instance.name]
		self._destroy_instance(instance)
		del self._instances[instance.name]

	def initialize_instance(self, instance):
		"""Initialize an instance."""
		log.debug("initializing instance %s (%s)" % (instance.name, self.name))
		self._instance_init(instance)

	def destroy_instances(self):
		"""Destroy all instances."""
		for instance in list(self._instances.values()):
			log.debug("destroying instance %s (%s)" % (instance.name, self.name))
			self._destroy_instance(instance)
		self._instances.clear()

	def _destroy_instance(self, instance):
		self.release_devices(instance)
		self._instance_cleanup(instance)

	def _instance_init(self, instance):
		raise NotImplementedError()

	def _instance_cleanup(self, instance):
		raise NotImplementedError()

	#
	# Devices handling
	#

	def _init_devices(self):
		self._devices_supported = False
		self._assigned_devices = set()
		self._free_devices = set()

	def _get_device_objects(self, devices):
		"""Override this in a subclass to transform a list of device names
		(e.g. ['sda']) to a list of pyudev.Device objects, if your plugin
		supports it"""
		return None

	def _get_matching_devices(self, instance, devices):
		if instance.devices_udev_regex is None:
			return set(self._device_matcher.match_list(instance.devices_expression, devices))
		else:
			udev_devices = self._get_device_objects(devices)
			if udev_devices is None:
				log.error("Plugin '%s' does not support the 'devices_udev_regex' option", self.name)
				return set()
			udev_devices = self._device_matcher_udev.match_list(instance.devices_udev_regex, udev_devices)
			return set([x.sys_name for x in udev_devices])

	def assign_free_devices(self, instance):
		if not self._devices_supported:
			return
		log.debug("assigning devices to instance %s" % instance.name)
		to_assign = self._get_matching_devices(instance, self._free_devices)
		instance.active = len(to_assign) > 0
		if not instance.active:
			log.warn("instance %s: no matching devices available" % instance.name)
		else:
			name = instance.name
			if instance.name != self.name:
				name += " (%s)" % self.name
			log.info("instance %s: assigning devices %s" % (name, ", ".join(to_assign)))
			instance.assigned_devices.update(to_assign) # cannot use |=
			self._assigned_devices |= to_assign
			self._free_devices -= to_assign

	def release_devices(self, instance):
		if not self._devices_supported:
			return
		to_release = (instance.processed_devices \
				| instance.assigned_devices) \
				& self._assigned_devices

		instance.active = False
		instance.processed_devices.clear()
		instance.assigned_devices.clear()

		self._assigned_devices -= to_release
		self._free_devices |= to_release

	#
	# Tuning activation and deactivation.
	#

	def _run_for_each_device(self, instance, callback, devices):
		if not self._devices_supported:
			devices = [None, ]
		for device in devices:
			callback(instance, device)

	def _instance_pre_static(self, instance, enabling):
		pass

	def _instance_post_static(self, instance, enabling):
		pass

	def _call_device_script(self, instance, script, op, devices, full_rollback = False):
		if script is None:
			return None
		if len(devices) == 0:
			log.warn("Instance '%s': no device to call script '%s' for." % (instance.name, script))
			return None
		if not script.startswith("/"):
			log.error("Relative paths cannot be used in script_pre or script_post. " \
					+ "Use ${i:PROFILE_DIR}.")
			return False
		dir_name = os.path.dirname(script)
		ret = True
		for dev in devices:
			environ = os.environ
			environ.update(self._variables.get_env())
			arguments = [op]
			if full_rollback:
				arguments.append("full_rollback")
			arguments.append(dev)
			log.info("calling script '%s' with arguments '%s'" % (script, str(arguments)))
			log.debug("using environment '%s'" % str(list(environ.items())))
			try:
				proc = Popen([script] + arguments, \
						stdout=PIPE, stderr=PIPE, \
						close_fds=True, env=environ, \
						cwd = dir_name, universal_newlines = True)
				out, err = proc.communicate()
				if proc.returncode:
					log.error("script '%s' error: %d, '%s'" % (script, proc.returncode, err[:-1]))
					ret = False
			except (OSError, IOError) as e:
				log.error("script '%s' error: %s" % (script, e))
				ret = False
		return ret

	def instance_apply_tuning(self, instance):
		"""
		Apply static and dynamic tuning if the plugin instance is active.
		"""
		if not instance.active:
			return

		if instance.has_static_tuning:
			self._call_device_script(instance, instance.script_pre, "apply", instance.assigned_devices)
			self._instance_pre_static(instance, True)
			self._instance_apply_static(instance)
			self._instance_post_static(instance, True)
			self._call_device_script(instance, instance.script_post, "apply", instance.assigned_devices)
		if instance.has_dynamic_tuning and self._global_cfg.get(consts.CFG_DYNAMIC_TUNING, consts.CFG_DEF_DYNAMIC_TUNING):
			self._run_for_each_device(instance, self._instance_apply_dynamic, instance.assigned_devices)
		instance.processed_devices.update(instance.assigned_devices)
		instance.assigned_devices.clear()

	def instance_verify_tuning(self, instance, ignore_missing):
		"""
		Verify static tuning if the plugin instance is active.
		"""
		if not instance.active:
			return None

		if len(instance.assigned_devices) != 0:
			log.error("BUG: Some devices have not been tuned: %s" % ", ".join(instance.assigned_devices))
		devices = instance.processed_devices.copy()
		if instance.has_static_tuning:
			if self._call_device_script(instance, instance.script_pre, "verify", devices) == False:
				return False
			if self._instance_verify_static(instance, ignore_missing, devices) == False:
				return False
			if self._call_device_script(instance, instance.script_post, "verify", devices) == False:
				return False
			return True
		else:
			return None

	def instance_update_tuning(self, instance):
		"""
		Apply dynamic tuning if the plugin instance is active.
		"""
		if not instance.active:
			return
		if instance.has_dynamic_tuning and self._global_cfg.get(consts.CFG_DYNAMIC_TUNING, consts.CFG_DEF_DYNAMIC_TUNING):
			self._run_for_each_device(instance, self._instance_update_dynamic, instance.processed_devices.copy())

	def instance_unapply_tuning(self, instance, full_rollback = False):
		"""
		Remove all tunings applied by the plugin instance.
		"""
		if instance.has_dynamic_tuning and self._global_cfg.get(consts.CFG_DYNAMIC_TUNING, consts.CFG_DEF_DYNAMIC_TUNING):
			self._run_for_each_device(instance, self._instance_unapply_dynamic, instance.processed_devices)
		if instance.has_static_tuning:
			self._call_device_script(instance, instance.script_post, "unapply", instance.processed_devices, full_rollback = full_rollback)
			self._instance_pre_static(instance, False)
			self._instance_unapply_static(instance, full_rollback)
			self._instance_post_static(instance, False)
			self._call_device_script(instance, instance.script_pre, "unapply", instance.processed_devices, full_rollback = full_rollback)

	def _instance_apply_static(self, instance):
		self._execute_all_non_device_commands(instance)
		self._execute_all_device_commands(instance, instance.assigned_devices)

	def _instance_verify_static(self, instance, ignore_missing, devices):
		ret = True
		if self._verify_all_non_device_commands(instance, ignore_missing) == False:
			ret = False
		if self._verify_all_device_commands(instance, devices, ignore_missing) == False:
			ret = False
		return ret

	def _instance_unapply_static(self, instance, full_rollback = False):
		self._cleanup_all_device_commands(instance, instance.processed_devices)
		self._cleanup_all_non_device_commands(instance)

	def _instance_apply_dynamic(self, instance, device):
		for option in [opt for opt in self._options_used_by_dynamic if self._storage_get(instance, self._commands[opt], device) is None]:
			self._check_and_save_value(instance, self._commands[option], device)
		self._instance_update_dynamic(instance, device)

	def _instance_unapply_dynamic(self, instance, device):
		raise NotImplementedError()

	def _instance_update_dynamic(self, instance, device):
		raise NotImplementedError()

	#
	# Registration of commands for static plugins.
	#

	def _init_commands(self):
		"""
		Initialize commands.
		"""
		self._commands = collections.OrderedDict()
		self._autoregister_commands()
		self._check_commands()

	def _autoregister_commands(self):
		"""
		Register all commands marked using @command_set, @command_get, and @command_custom decorators.
		"""
		for member_name in self.__class__.__dict__:
			if member_name.startswith("__"):
				continue
			member = getattr(self, member_name)
			if not hasattr(member, "_command"):
				continue

			command_name = member._command["name"]
			info = self._commands.get(command_name, {"name": command_name})

			if "set" in member._command:
				info["custom"] = None
				info["set"] = member
				info["per_device"] = member._command["per_device"]
				info["priority"] = member._command["priority"]
			elif "get" in member._command:
				info["get"] = member
			elif "custom" in member._command:
				info["custom"] = member
				info["per_device"] = member._command["per_device"]
				info["priority"] = member._command["priority"]

			self._commands[command_name] = info

		# sort commands by priority
		self._commands = collections.OrderedDict(sorted(iter(self._commands.items()), key=lambda name_info: name_info[1]["priority"]))

	def _check_commands(self):
		"""
		Check if all commands are defined correctly.
		"""
		for command_name, command in list(self._commands.items()):
			# do not check custom commands
			if command.get("custom", False):
				continue
			# automatic commands should have 'get' and 'set' functions
			if "get" not in command or "set" not in command:
				raise TypeError("Plugin command '%s' is not defined correctly" % command_name)

	#
	# Operations with persistent storage for status data.
	#

	def _storage_key(self, instance_name = None, command_name = None, device_name = None):
		class_name = type(self).__name__
		instance_name = "" if instance_name is None else instance_name
		command_name = "" if command_name is None else command_name
		device_name = "" if device_name is None else device_name
		return "%s/%s/%s/%s" % (class_name, instance_name, command_name, device_name)

	def _storage_set(self, instance, command, value, device_name=None):
		key = self._storage_key(instance.name, command["name"], device_name)
		self._storage.set(key, value)

	def _storage_get(self, instance, command, device_name=None):
		key = self._storage_key(instance.name, command["name"], device_name)
		return self._storage.get(key)

	def _storage_unset(self, instance, command, device_name=None):
		key = self._storage_key(instance.name, command["name"], device_name)
		return self._storage.unset(key)

	#
	# Command execution, verification, and cleanup.
	#

	def _execute_all_non_device_commands(self, instance):
		for command in [command for command in list(self._commands.values()) if not command["per_device"]]:
			new_value = self._variables.expand(instance.options.get(command["name"], None))
			if new_value is not None:
				self._execute_non_device_command(instance, command, new_value)

	def _execute_all_device_commands(self, instance, devices):
		for command in [command for command in list(self._commands.values()) if command["per_device"]]:
			new_value = self._variables.expand(instance.options.get(command["name"], None))
			if new_value is None:
				continue
			for device in devices:
				self._execute_device_command(instance, command, device, new_value)

	def _verify_all_non_device_commands(self, instance, ignore_missing):
		ret = True
		for command in [command for command in list(self._commands.values()) if not command["per_device"]]:
			new_value = self._variables.expand(instance.options.get(command["name"], None))
			if new_value is not None:
				if self._verify_non_device_command(instance, command, new_value, ignore_missing) == False:
					ret = False
		return ret

	def _verify_all_device_commands(self, instance, devices, ignore_missing):
		ret = True
		for command in [command for command in list(self._commands.values()) if command["per_device"]]:
			new_value = instance.options.get(command["name"], None)
			if new_value is None:
				continue
			for device in devices:
				if self._verify_device_command(instance, command, device, new_value, ignore_missing) == False:
					ret = False
		return ret

	def _process_assignment_modifiers(self, new_value, current_value):
		if new_value is not None:
			nws = str(new_value)
			if len(nws) <= 1:
				return new_value
			op = nws[:1]
			val = nws[1:]
			if current_value is None:
				return val if op in ["<", ">"] else new_value
			try:
				if op == ">":
					if int(val) > int(current_value):
						return val
					else:
						return None
				elif op == "<":
					if int(val) < int(current_value):
						return val
					else:
						return None
			except ValueError:
				log.warn("cannot compare new value '%s' with current value '%s' by operator '%s', using '%s' directly as new value" % (val, current_value, op, new_value))
		return new_value

	def _get_current_value(self, command, device = None, ignore_missing=False):
		if device is not None:
			return command["get"](device, ignore_missing=ignore_missing)
		else:
			return command["get"]()

	def _check_and_save_value(self, instance, command, device = None, new_value = None):
		current_value = self._get_current_value(command, device)
		new_value = self._process_assignment_modifiers(new_value, current_value)
		if new_value is not None and current_value is not None:
			self._storage_set(instance, command, current_value, device)
		return new_value

	def _execute_device_command(self, instance, command, device, new_value):
		if command["custom"] is not None:
			command["custom"](True, new_value, device, False, False)
		else:
			new_value = self._check_and_save_value(instance, command, device, new_value)
			if new_value is not None:
				command["set"](new_value, device, sim = False)

	def _execute_non_device_command(self, instance, command, new_value):
		if command["custom"] is not None:
			command["custom"](True, new_value, False, False)
		else:
			new_value = self._check_and_save_value(instance, command, None, new_value)
			if new_value is not None:
				command["set"](new_value, sim = False)

	def _norm_value(self, value):
		v = self._cmd.unquote(str(value))
		if re.match(r'\s*(0+,?)+([\da-fA-F]*,?)*\s*$', v):
			return re.sub(r'^\s*(0+,?)+', "", v)
		return v

	def _verify_value(self, name, new_value, current_value, ignore_missing, device = None):
		if new_value is None:
			return None
		ret = False
		if current_value is None and ignore_missing:
			if device is None:
				log.info(consts.STR_VERIFY_PROFILE_VALUE_MISSING % name)
			else:
				log.info(consts.STR_VERIFY_PROFILE_DEVICE_VALUE_MISSING % (device, name))
			return True

		if current_value is not None:
			current_value = self._norm_value(current_value)
			new_value = self._norm_value(new_value)
			try:
				ret = int(new_value) == int(current_value)
			except ValueError:
				try:
					ret = int(new_value, 16) == int(current_value, 16)
				except ValueError:
					ret = str(new_value) == str(current_value)
					if not ret:
						vals = str(new_value).split('|')
						for val in vals:
							val = val.strip()
							ret = val == current_value
							if ret:
								break
		self._log_verification_result(name, ret, new_value, current_value, device = device)
		return ret

	def _log_verification_result(self, name, success, new_value, current_value, device = None):
		if success:
			if device is None:
				log.info(consts.STR_VERIFY_PROFILE_VALUE_OK % (name, str(current_value).strip()))
			else:
				log.info(consts.STR_VERIFY_PROFILE_DEVICE_VALUE_OK % (device, name, str(current_value).strip()))
			return True
		else:
			if device is None:
				log.error(consts.STR_VERIFY_PROFILE_VALUE_FAIL % (name, str(current_value).strip(), str(new_value).strip()))
			else:
				log.error(consts.STR_VERIFY_PROFILE_DEVICE_VALUE_FAIL % (device, name, str(current_value).strip(), str(new_value).strip()))
			return False

	def _verify_device_command(self, instance, command, device, new_value, ignore_missing):
		if command["custom"] is not None:
			return command["custom"](True, new_value, device, True, ignore_missing)
		current_value = self._get_current_value(command, device, ignore_missing=ignore_missing)
		new_value = self._process_assignment_modifiers(new_value, current_value)
		if new_value is None:
			return None
		new_value = command["set"](new_value, device, True)
		return self._verify_value(command["name"], new_value, current_value, ignore_missing, device)

	def _verify_non_device_command(self, instance, command, new_value, ignore_missing):
		if command["custom"] is not None:
			return command["custom"](True, new_value, True, ignore_missing)
		current_value = self._get_current_value(command)
		new_value = self._process_assignment_modifiers(new_value, current_value)
		if new_value is None:
			return None
		new_value = command["set"](new_value, True)
		return self._verify_value(command["name"], new_value, current_value, ignore_missing)

	def _cleanup_all_non_device_commands(self, instance):
		for command in reversed([command for command in list(self._commands.values()) if not command["per_device"]]):
			if (instance.options.get(command["name"], None) is not None) or (command["name"] in self._options_used_by_dynamic):
				self._cleanup_non_device_command(instance, command)

	def _cleanup_all_device_commands(self, instance, devices):
		for command in reversed([command for command in list(self._commands.values()) if command["per_device"]]):
			if (instance.options.get(command["name"], None) is not None) or (command["name"] in self._options_used_by_dynamic):
				for device in devices:
					self._cleanup_device_command(instance, command, device)

	def _cleanup_device_command(self, instance, command, device):
		if command["custom"] is not None:
			command["custom"](False, None, device, False, False)
		else:
			old_value = self._storage_get(instance, command, device)
			if old_value is not None:
				command["set"](old_value, device, sim = False)
			self._storage_unset(instance, command, device)

	def _cleanup_non_device_command(self, instance, command):
		if command["custom"] is not None:
			command["custom"](False, None, False, False)
		else:
			old_value = self._storage_get(instance, command)
			if old_value is not None:
				command["set"](old_value, sim = False)
			self._storage_unset(instance, command)

07070100000141000081A40000000000000000000000016391BC3A000003D7000000000000000000000000000000000000003800000000tuned-2.19.0.29+git.b894a3e/tuned/plugins/decorators.py
__all__ = ["command_set", "command_get", "command_custom"]

# @command_set("scheduler", per_device=True)
# def set_scheduler(self, value, device):
#     set_new_scheduler
#
# @command_get("scheduler")
# def get_scheduler(self, device):
#     return current_scheduler
#
# @command_set("foo")
# def set_foo(self, value):
#     set_new_foo
#
# @command_get("foo")
# def get_foo(self):
#     return current_foo

def command_set(name, per_device=False, priority=0):
	def wrapper(method):
		method._command = {
			"set": True,
			"name": name,
			"per_device": per_device,
			"priority": priority,
		}
		return method
	return wrapper

def command_get(name):
	def wrapper(method):
		method._command = {
			"get": True,
			"name": name,
		}
		return method
	return wrapper

def command_custom(name, per_device=False, priority=0):
	def wrapper(method):
		method._command = {
			"custom": True,
			"name": name,
			"per_device": per_device,
			"priority": priority,
		}
		return method
	return wrapper

07070100000142000081A40000000000000000000000016391BC3A00000063000000000000000000000000000000000000003800000000tuned-2.19.0.29+git.b894a3e/tuned/plugins/exceptions.py
import tuned.exceptions

class NotSupportedPluginException(tuned.exceptions.TunedException):
	pass

07070100000143000081A40000000000000000000000016391BC3A00000BDD000000000000000000000000000000000000003500000000tuned-2.19.0.29+git.b894a3e/tuned/plugins/hotplug.py
from . import base
import tuned.consts as consts
import tuned.logs

log = tuned.logs.get()

class Plugin(base.Plugin):
	"""
	Base class for plugins with device hotplugging support.
""" def __init__(self, *args, **kwargs): super(Plugin, self).__init__(*args, **kwargs) def cleanup(self): super(Plugin, self).cleanup() self._hardware_events_cleanup() def _hardware_events_init(self): raise NotImplementedError() def _hardware_events_cleanup(self): raise NotImplementedError() def _init_devices(self): self._hardware_events_init() def _hardware_events_callback(self, event, device): if event == "add": log.info("device '%s' added" % device.sys_name) self._add_device(device) elif event == "remove": log.info("device '%s' removed" % device.sys_name) self._remove_device(device) def _add_device(self, device): device_name = device.sys_name if device_name in (self._assigned_devices | self._free_devices): return for instance_name, instance in list(self._instances.items()): if len(self._get_matching_devices(instance, [device_name])) == 1: log.info("instance %s: adding new device %s" % (instance_name, device_name)) self._assigned_devices.add(device_name) self._call_device_script(instance, instance.script_pre, "apply", [device_name]) self._added_device_apply_tuning(instance, device_name) self._call_device_script(instance, instance.script_post, "apply", [device_name]) instance.processed_devices.add(device_name) break else: log.debug("no instance wants %s" % device_name) self._free_devices.add(device_name) def _remove_device(self, device): device_name = device.sys_name if device_name not in (self._assigned_devices | self._free_devices): return for instance in list(self._instances.values()): if device_name in instance.processed_devices: self._call_device_script(instance, instance.script_post, "unapply", [device_name]) self._removed_device_unapply_tuning(instance, device_name) self._call_device_script(instance, instance.script_pre, "unapply", [device_name]) instance.processed_devices.remove(device_name) # This can be a bit racy (we can overcount), # but it shouldn't affect the boolean result instance.active = len(instance.processed_devices) \ + 
len(instance.assigned_devices) > 0 self._assigned_devices.remove(device_name) break else: self._free_devices.remove(device_name) def _added_device_apply_tuning(self, instance, device_name): self._execute_all_device_commands(instance, [device_name]) if instance.has_dynamic_tuning and self._global_cfg.get(consts.CFG_DYNAMIC_TUNING, consts.CFG_DEF_DYNAMIC_TUNING): self._instance_apply_dynamic(instance, device_name) def _removed_device_unapply_tuning(self, instance, device_name): if instance.has_dynamic_tuning and self._global_cfg.get(consts.CFG_DYNAMIC_TUNING, consts.CFG_DEF_DYNAMIC_TUNING): self._instance_unapply_dynamic(instance, device_name) self._cleanup_all_device_commands(instance, [device_name]) 07070100000144000041ED0000000000000000000000026391BC3A00000000000000000000000000000000000000000000003300000000tuned-2.19.0.29+git.b894a3e/tuned/plugins/instance07070100000145000081A40000000000000000000000016391BC3A0000003C000000000000000000000000000000000000003F00000000tuned-2.19.0.29+git.b894a3e/tuned/plugins/instance/__init__.pyfrom .instance import Instance from .factory import Factory 07070100000146000081A40000000000000000000000016391BC3A00000094000000000000000000000000000000000000003E00000000tuned-2.19.0.29+git.b894a3e/tuned/plugins/instance/factory.pyfrom .instance import Instance class Factory(object): def create(self, *args, **kwargs): instance = Instance(*args, **kwargs) return instance 07070100000147000081A40000000000000000000000016391BC3A0000078B000000000000000000000000000000000000003F00000000tuned-2.19.0.29+git.b894a3e/tuned/plugins/instance/instance.pyclass Instance(object): """ """ def __init__(self, plugin, name, devices_expression, devices_udev_regex, script_pre, script_post, options): self._plugin = plugin self._name = name self._devices_expression = devices_expression self._devices_udev_regex = devices_udev_regex self._script_pre = script_pre self._script_post = script_post self._options = options self._active = True self._has_static_tuning = False 
self._has_dynamic_tuning = False self._assigned_devices = set() self._processed_devices = set() # properties @property def plugin(self): return self._plugin @property def name(self): return self._name @property def active(self): """The instance performs some tuning (otherwise it is suspended).""" return self._active @active.setter def active(self, value): self._active = value @property def devices_expression(self): return self._devices_expression @property def assigned_devices(self): return self._assigned_devices @property def processed_devices(self): return self._processed_devices @property def devices_udev_regex(self): return self._devices_udev_regex @property def script_pre(self): return self._script_pre @property def script_post(self): return self._script_post @property def options(self): return self._options @property def has_static_tuning(self): return self._has_static_tuning @property def has_dynamic_tuning(self): return self._has_dynamic_tuning # methods def apply_tuning(self): self._plugin.instance_apply_tuning(self) def verify_tuning(self, ignore_missing): return self._plugin.instance_verify_tuning(self, ignore_missing) def update_tuning(self): self._plugin.instance_update_tuning(self) def unapply_tuning(self, full_rollback = False): self._plugin.instance_unapply_tuning(self, full_rollback) def destroy(self): self.unapply_tuning() self._plugin.destroy_instance(self) 07070100000148000081A40000000000000000000000016391BC3A00000BF8000000000000000000000000000000000000003A00000000tuned-2.19.0.29+git.b894a3e/tuned/plugins/plugin_audio.pyfrom . import base from .decorators import * import tuned.logs from tuned.utils.commands import commands import os import struct import glob log = tuned.logs.get() cmd = commands() class AudioPlugin(base.Plugin): """ `audio`:: Sets audio cards power saving options. The plug-in sets the auto suspend timeout for audio codecs to the value specified by the [option]`timeout` option. 
+ Currently, the `snd_hda_intel` and `snd_ac97_codec` codecs are supported and the [option]`timeout` value is in seconds. To disable auto suspend for these codecs, set the [option]`timeout` value to `0`. To enforce the controller reset, set the option [option]`reset_controller` to `true`. Note that power management is supported per module. Hence, the kernel module names are used as device names. + .Set the timeout value to 10s and enforce the controller reset ==== ---- [audio] timeout=10 reset_controller=true ---- ==== """ def _init_devices(self): self._devices_supported = True self._assigned_devices = set() self._free_devices = set() for device in self._hardware_inventory.get_devices("sound").match_sys_name("card*"): module_name = self._device_module_name(device) if module_name in ["snd_hda_intel", "snd_ac97_codec"]: self._free_devices.add(module_name) def _instance_init(self, instance): instance._has_static_tuning = True instance._has_dynamic_tuning = False def _instance_cleanup(self, instance): pass def _device_module_name(self, device): try: return device.parent.driver except: return None @classmethod def _get_config_options(cls): return { "timeout": 0, "reset_controller": False, } def _timeout_path(self, device): return "/sys/module/%s/parameters/power_save" % device def _reset_controller_path(self, device): return "/sys/module/%s/parameters/power_save_controller" % device @command_set("timeout", per_device = True) def _set_timeout(self, value, device, sim): try: timeout = int(value) except ValueError: log.error("timeout value '%s' is not integer" % value) return None if timeout >= 0: sys_file = self._timeout_path(device) if not sim: cmd.write_to_file(sys_file, "%d" % timeout) return timeout else: return None @command_get("timeout") def _get_timeout(self, device, ignore_missing=False): sys_file = self._timeout_path(device) value = cmd.read_file(sys_file, no_error=ignore_missing) if len(value) > 0: return value return None @command_set("reset_controller", 
per_device = True) def _set_reset_controller(self, value, device, sim): v = cmd.get_bool(value) sys_file = self._reset_controller_path(device) if os.path.exists(sys_file): if not sim: cmd.write_to_file(sys_file, v) return v return None @command_get("reset_controller") def _get_reset_controller(self, device, ignore_missing=False): sys_file = self._reset_controller_path(device) if os.path.exists(sys_file): value = cmd.read_file(sys_file) if len(value) > 0: return cmd.get_bool(value) return None 07070100000149000081A40000000000000000000000016391BC3A00006390000000000000000000000000000000000000003F00000000tuned-2.19.0.29+git.b894a3e/tuned/plugins/plugin_bootloader.pyfrom . import base from .decorators import * import tuned.logs from . import exceptions from tuned.utils.commands import commands import tuned.consts as consts import os import re import tempfile from time import sleep log = tuned.logs.get() class BootloaderPlugin(base.Plugin): """ `bootloader`:: Adds options to the kernel command line. This plug-in supports the GRUB 2 boot loader and the Boot Loader Specification (BLS). + NOTE: *TuneD* will not remove or replace kernel command line parameters added via other methods like *grubby*. *TuneD* will manage the kernel command line parameters added via *TuneD*. Please refer to your platform bootloader documentation about how to identify and manage kernel command line parameters set outside of *TuneD*. + Customized non-standard location of the GRUB 2 configuration file can be specified by the [option]`grub2_cfg_file` option. + The kernel options are added to the current GRUB configuration and its templates. Reboot the system for the kernel option to take effect. + Switching to another profile or manually stopping the `tuned` service removes the additional options. If you shut down or reboot the system, the kernel options persist in the [filename]`grub.cfg` file and grub environment files. 
+ The kernel options can be specified using the following syntax: + [subs="+quotes,+macros"] ---- cmdline__suffix__=__arg1__ __arg2__ ... __argN__ ---- + Or with an alternative, but equivalent, syntax: + [subs="+quotes,+macros"] ---- cmdline__suffix__=+__arg1__ __arg2__ ... __argN__ ---- + Here, __suffix__ can be an arbitrary (even empty) alphanumeric string which should be unique across all loaded profiles. It is recommended to use the profile name as the __suffix__ (for example, [option]`cmdline_my_profile`). If there are multiple [option]`cmdline` options with the same suffix, during the profile load/merge the value which was assigned previously will be used. This is the same behavior as for any other plug-in option. The final kernel command line is constructed by concatenating all the resulting [option]`cmdline` options. + It is also possible to remove kernel options using the following syntax: + [subs="+quotes,+macros"] ---- cmdline__suffix__=-__arg1__ __arg2__ ... __argN__ ---- + Such kernel options will not be concatenated and will thus be removed during the final kernel command line construction. 
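The append ('+' or bare) and remove ('-') semantics described above can be illustrated with a small merge helper. This is a minimal sketch of the documented behavior, not TuneD's actual implementation; the function name is illustrative:

```python
import re

def merge_cmdline(cmdline_options):
    """Merge cmdline__suffix__ values in load order.

    Values starting with '-' remove the listed arguments from the
    accumulated command line; '+' and bare values append them.
    """
    cmdline = ""
    for val in cmdline_options.values():
        if not val:
            continue
        if val.startswith("-"):
            for arg in val[1:].split():
                # drop the argument wherever it appears in the command line
                cmdline = re.sub(r"(\A|\s)" + re.escape(arg) + r"(?=\Z|\s)", "", cmdline)
        else:
            cmdline += " " + val.lstrip("+")
    return cmdline.strip()
```

For example, `merge_cmdline({"cmdline_profile_1": "+rhgb quiet", "cmdline_profile_2": "-quiet"})` yields `rhgb`, matching the concatenate-then-remove construction described above.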
+ .Modifying the kernel command line ==== For example, to add the [option]`quiet` kernel option to a *TuneD* profile, include the following lines in the [filename]`tuned.conf` file: ---- [bootloader] cmdline_my_profile=+quiet ---- An example of a custom profile `my_profile` that adds the [option]`isolcpus=2` option to the kernel command line: ---- [bootloader] cmdline_my_profile=isolcpus=2 ---- An example of a custom profile `my_profile` that removes the [option]`rhgb quiet` options from the kernel command line (if previously added by *TuneD*): ---- [bootloader] cmdline_my_profile=-rhgb quiet ---- ==== + .Modifying the kernel command line, example with inheritance ==== For example, to add the [option]`rhgb quiet` kernel options to a *TuneD* profile `profile_1`: ---- [bootloader] cmdline_profile_1=+rhgb quiet ---- In the child profile `profile_2` drop the [option]`quiet` option from the kernel command line: ---- [main] include=profile_1 [bootloader] cmdline_profile_2=-quiet ---- The final kernel command line will be [option]`rhgb`. In case the same [option]`cmdline` suffix as in the `profile_1` is used: ---- [main] include=profile_1 [bootloader] cmdline_profile_1=-quiet ---- It will result in the empty kernel command line because the merge executes and the [option]`cmdline_profile_1` gets redefined to just [option]`-quiet`. Thus there is nothing to remove in the final kernel command line processing. ==== + The [option]`initrd_add_img=IMAGE` adds an initrd overlay file `IMAGE`. If the `IMAGE` file name begins with '/', the absolute path is used. Otherwise, the current profile directory is used as the base directory for the `IMAGE`. + The [option]`initrd_add_dir=DIR` creates an initrd image from the directory `DIR` and adds the resulting image as an overlay. If the `DIR` directory name begins with '/', the absolute path is used. Otherwise, the current profile directory is used as the base directory for the `DIR`. 
+ The [option]`initrd_dst_img=PATHNAME` sets the name and location of the resulting initrd image. Typically, it is not necessary to use this option. By default, the location of initrd images is `/boot` and the name of the image is taken as the basename of `IMAGE` or `DIR`. This can be overridden by setting [option]`initrd_dst_img`. + The [option]`initrd_remove_dir=VALUE` removes the source directory from which the initrd image was built if `VALUE` is true. Only 'y', 'yes', 't', 'true' and '1' (case insensitive) are accepted as true values for this option. Other values are interpreted as false. + .Adding an overlay initrd image ==== ---- [bootloader] initrd_remove_dir=True initrd_add_dir=/tmp/tuned-initrd.img ---- This creates an initrd image from the `/tmp/tuned-initrd.img` directory and then removes the `tuned-initrd.img` directory from `/tmp`. ==== + The [option]`skip_grub_config=VALUE` does not change the grub configuration if `VALUE` is true. However, [option]`cmdline` options are still processed, and the result is used to verify the current cmdline. Only 'y', 'yes', 't', 'true' and '1' (case insensitive) are accepted as true values for this option. Other values are interpreted as false. 
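The true-value parsing described for [option]`initrd_remove_dir` and [option]`skip_grub_config` boils down to a membership test. A minimal sketch (TuneD's real helper lives in `tuned.utils.commands`; the function name here is illustrative):

```python
def is_true(value):
    """Accept only 'y', 'yes', 't', 'true' and '1' (case insensitive) as true."""
    return str(value).strip().lower() in ("y", "yes", "t", "true", "1")
```

Any other value, including 'on' or 'enabled', is interpreted as false.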
+ .Do not change grub configuration ==== ---- [bootloader] skip_grub_config=True cmdline=+systemd.cpu_affinity=1 ---- ==== """ def __init__(self, *args, **kwargs): if not os.path.isfile(consts.GRUB2_TUNED_TEMPLATE_PATH): raise exceptions.NotSupportedPluginException("Required GRUB2 template not found, disabling plugin.") super(BootloaderPlugin, self).__init__(*args, **kwargs) self._cmd = commands() def _instance_init(self, instance): instance._has_dynamic_tuning = False instance._has_static_tuning = True # controls grub2_cfg rewrites in _instance_post_static self.update_grub2_cfg = False self._skip_grub_config_val = False self._initrd_remove_dir = False self._initrd_dst_img_val = None self._cmdline_val = "" self._initrd_val = "" self._grub2_cfg_file_names = self._get_grub2_cfg_files() self._bls = self._bls_enabled() self._rpm_ostree = self._rpm_ostree_status() is not None def _instance_cleanup(self, instance): pass @classmethod def _get_config_options(cls): return { "grub2_cfg_file": None, "initrd_dst_img": None, "initrd_add_img": None, "initrd_add_dir": None, "initrd_remove_dir": None, "cmdline": None, "skip_grub_config": None, } @staticmethod def _options_to_dict(options, omit=""): """ Returns dict created from options e.g.: _options_to_dict("A=A A=B A B=A C=A", "A=B B=A B=B") returns {'A': ['A', None], 'C': ['A']} """ d = {} omit = omit.split() for o in options.split(): if o not in omit: arr = o.split('=', 1) d.setdefault(arr[0], []).append(arr[1] if len(arr) > 1 else None) return d @staticmethod def _dict_to_options(d): return " ".join([k + "=" + v1 if v1 is not None else k for k, v in d.items() for v1 in v]) def _rpm_ostree_status(self): """ Returns status of rpm-ostree transactions or None if not run on rpm-ostree system """ (rc, out, err) = self._cmd.execute(['rpm-ostree', 'status'], return_err=True) log.debug("rpm-ostree status output stdout:\n%s\nstderr:\n%s" % (out, err)) if rc != 0: return None splited = out.split() if len(splited) < 2 or splited[0] != 
"State:": log.warn("Exceptional format of rpm-ostree status result:\n%s" % out) return None return splited[1] def _wait_till_idle(self): sleep_cycles = 10 sleep_secs = 1.0 for i in range(sleep_cycles): if self._rpm_ostree_status() == "idle": return True sleep(sleep_secs) if self._rpm_ostree_status() == "idle": return True return False def _rpm_ostree_kargs(self, append={}, delete={}): """ Method for appending or deleting rpm-ostree karg returns None if rpm-ostree not present or is run on not ostree system or tuple with new kargs, appended kargs and deleted kargs """ (rc, out, err) = self._cmd.execute(['rpm-ostree', 'kargs'], return_err=True) log.debug("rpm-ostree output stdout:\n%s\nstderr:\n%s" % (out, err)) if rc != 0: return None, None, None kargs = self._options_to_dict(out) if not self._wait_till_idle(): log.error("Cannot wait for transaction end") return None, None, None deleted = {} delete_params = self._dict_to_options(delete).split() # Deleting kargs, e.g. deleting added kargs by profile for k, val in delete.items(): for v in val: kargs[k].remove(v) deleted[k] = val appended = {} append_params = self._dict_to_options(append).split() # Appending kargs, e.g. 
new kargs by profile or restoring kargs replaced by profile for k, val in append.items(): if kargs.get(k): # If there is karg that we add with new value we want to delete it # and store old value for restoring after profile unload log.debug("adding rpm-ostree kargs %s: %s for delete" % (k, kargs[k])) deleted.setdefault(k, []).extend(kargs[k]) delete_params.extend([k + "=" + v if v is not None else k for v in kargs[k]]) kargs[k] = [] kargs.setdefault(k, []).extend(val) appended[k] = val log.info("rpm-ostree kargs - appending: '%s'; deleting: '%s'" % (append_params, delete_params)) (rc, _, err) = self._cmd.execute(['rpm-ostree', 'kargs'] + ['--append=%s' % v for v in append_params] + ['--delete=%s' % v for v in delete_params], return_err=True) if rc != 0: log.error("Something went wrong with rpm-ostree kargs\n%s" % (err)) return self._options_to_dict(out), None, None else: return kargs, appended, deleted def _get_effective_options(self, options): """Merge provided options with plugin default options and merge all cmdline.* options.""" effective = self._get_config_options().copy() cmdline_keys = [] for key in options: if str(key).startswith("cmdline"): cmdline_keys.append(key) elif key in effective: effective[key] = options[key] else: log.warn("Unknown option '%s' for plugin '%s'." 
% (key, self.__class__.__name__)) cmdline = "" for key in cmdline_keys: val = options[key] if val is None or val == "": continue op = val[0] op1 = val[1:2] vals = val[1:].strip() if op == "+" or (op == "\\" and op1 in ["\\", "+", "-"]): if vals != "": cmdline += " " + vals elif op == "-": if vals != "": for p in vals.split(): regex = re.escape(p) cmdline = re.sub(r"(\A|\s)" + regex + r"(?=\Z|\s)", r"", cmdline) else: cmdline += " " + val cmdline = cmdline.strip() if cmdline != "": effective["cmdline"] = cmdline return effective def _get_grub2_cfg_files(self): cfg_files = [] for f in consts.GRUB2_CFG_FILES: if os.path.exists(f): cfg_files.append(f) return cfg_files def _bls_enabled(self): grub2_default_env = self._cmd.read_file(consts.GRUB2_DEFAULT_ENV_FILE, no_error = True) if len(grub2_default_env) <= 0: log.info("cannot read '%s'" % consts.GRUB2_DEFAULT_ENV_FILE) return False return re.search(r"^\s*GRUB_ENABLE_BLSCFG\s*=\s*\"?\s*[tT][rR][uU][eE]\s*\"?\s*$", grub2_default_env, flags = re.MULTILINE) is not None def _patch_bootcmdline(self, d): return self._cmd.add_modify_option_in_file(consts.BOOT_CMDLINE_FILE, d) def _remove_grub2_tuning(self): self._patch_bootcmdline({consts.BOOT_CMDLINE_TUNED_VAR : "", consts.BOOT_CMDLINE_INITRD_ADD_VAR : ""}) if not self._grub2_cfg_file_names: log.info("cannot find grub.cfg to patch") return for f in self._grub2_cfg_file_names: self._cmd.add_modify_option_in_file(f, {"set\s+" + consts.GRUB2_TUNED_VAR : "", "set\s+" + consts.GRUB2_TUNED_INITRD_VAR : ""}, add = False) if self._initrd_dst_img_val is not None: log.info("removing initrd image '%s'" % self._initrd_dst_img_val) self._cmd.unlink(self._initrd_dst_img_val) def _get_rpm_ostree_changes(self): f = self._cmd.read_file(consts.BOOT_CMDLINE_FILE) appended = re.search(consts.BOOT_CMDLINE_TUNED_VAR + r"=\"(.*)\"", f, flags=re.MULTILINE) appended = appended[1] if appended else "" deleted = re.search(consts.BOOT_CMDLINE_KARGS_DELETED_VAR + r"=\"(.*)\"", f, flags=re.MULTILINE) 
deleted = deleted[1] if deleted else "" return appended, deleted def _remove_rpm_ostree_tuning(self): appended, deleted = self._get_rpm_ostree_changes() self._rpm_ostree_kargs(append=self._options_to_dict(deleted), delete=self._options_to_dict(appended)) self._patch_bootcmdline({consts.BOOT_CMDLINE_TUNED_VAR: "", consts.BOOT_CMDLINE_KARGS_DELETED_VAR: ""}) def _instance_unapply_static(self, instance, full_rollback = False): if full_rollback and not self._skip_grub_config_val: if self._rpm_ostree: log.info("removing rpm-ostree tuning previously added by Tuned") self._remove_rpm_ostree_tuning() else: log.info("removing grub2 tuning previously added by Tuned") self._remove_grub2_tuning() self._update_grubenv({"tuned_params" : "", "tuned_initrd" : ""}) def _grub2_cfg_unpatch(self, grub2_cfg): log.debug("unpatching grub.cfg") cfg = re.sub(r"^\s*set\s+" + consts.GRUB2_TUNED_VAR + "\s*=.*\n", "", grub2_cfg, flags = re.MULTILINE) grub2_cfg = re.sub(r" *\$" + consts.GRUB2_TUNED_VAR, "", cfg, flags = re.MULTILINE) cfg = re.sub(r"^\s*set\s+" + consts.GRUB2_TUNED_INITRD_VAR + "\s*=.*\n", "", grub2_cfg, flags = re.MULTILINE) grub2_cfg = re.sub(r" *\$" + consts.GRUB2_TUNED_INITRD_VAR, "", cfg, flags = re.MULTILINE) cfg = re.sub(consts.GRUB2_TEMPLATE_HEADER_BEGIN + r"\n", "", grub2_cfg, flags = re.MULTILINE) return re.sub(consts.GRUB2_TEMPLATE_HEADER_END + r"\n+", "", cfg, flags = re.MULTILINE) def _grub2_cfg_patch_initial(self, grub2_cfg, d): log.debug("initial patching of grub.cfg") s = r"\1\n\n" + consts.GRUB2_TEMPLATE_HEADER_BEGIN + "\n" for opt in d: s += r"set " + self._cmd.escape(opt) + "=\"" + self._cmd.escape(d[opt]) + "\"\n" s += consts.GRUB2_TEMPLATE_HEADER_END + r"\n" grub2_cfg = re.sub(r"^(\s*###\s+END\s+[^#]+/00_header\s+### *)\n", s, grub2_cfg, flags = re.MULTILINE) d2 = {"linux" : consts.GRUB2_TUNED_VAR, "initrd" : consts.GRUB2_TUNED_INITRD_VAR} for i in d2: # add TuneD parameters to all kernels grub2_cfg = re.sub(r"^(\s*" + i + r"(16|efi)?\s+.*)$", r"\1 $" + 
d2[i], grub2_cfg, flags = re.MULTILINE) # remove TuneD parameters from rescue kernels grub2_cfg = re.sub(r"^(\s*" + i + r"(?:16|efi)?\s+\S+rescue.*)\$" + d2[i] + r" *(.*)$", r"\1\2", grub2_cfg, flags = re.MULTILINE) # fix whitespaces in rescue kernels grub2_cfg = re.sub(r"^(\s*" + i + r"(?:16|efi)?\s+\S+rescue.*) +$", r"\1", grub2_cfg, flags = re.MULTILINE) return grub2_cfg def _grub2_default_env_patch(self): grub2_default_env = self._cmd.read_file(consts.GRUB2_DEFAULT_ENV_FILE) if len(grub2_default_env) <= 0: log.info("cannot read '%s'" % consts.GRUB2_DEFAULT_ENV_FILE) return False d = {"GRUB_CMDLINE_LINUX_DEFAULT" : consts.GRUB2_TUNED_VAR, "GRUB_INITRD_OVERLAY" : consts.GRUB2_TUNED_INITRD_VAR} write = False for i in d: if re.search(r"^[^#]*\b" + i + r"\s*=.*\\\$" + d[i] + r"\b.*$", grub2_default_env, flags = re.MULTILINE) is None: write = True if grub2_default_env[-1] != "\n": grub2_default_env += "\n" grub2_default_env += i + "=\"${" + i + ":+$" + i + r" }\$" + d[i] + "\"\n" if write: log.debug("patching '%s'" % consts.GRUB2_DEFAULT_ENV_FILE) self._cmd.write_to_file(consts.GRUB2_DEFAULT_ENV_FILE, grub2_default_env) return True def _grub2_default_env_unpatch(self): grub2_default_env = self._cmd.read_file(consts.GRUB2_DEFAULT_ENV_FILE) if len(grub2_default_env) <= 0: log.info("cannot read '%s'" % consts.GRUB2_DEFAULT_ENV_FILE) return False write = False if re.search(r"^GRUB_CMDLINE_LINUX_DEFAULT=\"\$\{GRUB_CMDLINE_LINUX_DEFAULT:\+\$GRUB_CMDLINE_LINUX_DEFAULT \}\\\$" + consts.GRUB2_TUNED_VAR + "\"$", grub2_default_env, flags = re.MULTILINE): write = True cfg = re.sub(r"^GRUB_CMDLINE_LINUX_DEFAULT=\"\$\{GRUB_CMDLINE_LINUX_DEFAULT:\+\$GRUB_CMDLINE_LINUX_DEFAULT \}\\\$" + consts.GRUB2_TUNED_VAR + "\"$\n", "", grub2_default_env, flags = re.MULTILINE) if cfg[-1] != "\n": cfg += "\n" if write: log.debug("unpatching '%s'" % consts.GRUB2_DEFAULT_ENV_FILE) self._cmd.write_to_file(consts.GRUB2_DEFAULT_ENV_FILE, cfg) return True def _grub2_cfg_patch(self, d): 
log.debug("patching grub.cfg") if not self._grub2_cfg_file_names: log.info("cannot find grub.cfg to patch") return False for f in self._grub2_cfg_file_names: grub2_cfg = self._cmd.read_file(f) if len(grub2_cfg) <= 0: log.info("cannot patch %s" % f) continue log.debug("adding boot command line parameters to '%s'" % f) grub2_cfg_new = grub2_cfg patch_initial = False for opt in d: (grub2_cfg_new, nsubs) = re.subn(r"\b(set\s+" + opt + "\s*=).*$", r"\1" + "\"" + self._cmd.escape(d[opt]) + "\"", grub2_cfg_new, flags = re.MULTILINE) if nsubs < 1 or re.search(r"\$" + opt, grub2_cfg, flags = re.MULTILINE) is None: patch_initial = True # workaround for rhbz#1442117 if len(re.findall(r"\$" + consts.GRUB2_TUNED_VAR, grub2_cfg, flags = re.MULTILINE)) != \ len(re.findall(r"\$" + consts.GRUB2_TUNED_INITRD_VAR, grub2_cfg, flags = re.MULTILINE)): patch_initial = True if patch_initial: grub2_cfg_new = self._grub2_cfg_patch_initial(self._grub2_cfg_unpatch(grub2_cfg), d) self._cmd.write_to_file(f, grub2_cfg_new) if self._bls: self._grub2_default_env_unpatch() else: self._grub2_default_env_patch() return True def _rpm_ostree_update(self): appended, _ = self._get_rpm_ostree_changes() _cmdline_dict = self._options_to_dict(self._cmdline_val, appended) if not _cmdline_dict: return None (_, _, d) = self._rpm_ostree_kargs(append=_cmdline_dict) if d is None: return self._patch_bootcmdline({consts.BOOT_CMDLINE_TUNED_VAR : self._cmdline_val, consts.BOOT_CMDLINE_KARGS_DELETED_VAR : self._dict_to_options(d)}) def _grub2_update(self): self._grub2_cfg_patch({consts.GRUB2_TUNED_VAR : self._cmdline_val, consts.GRUB2_TUNED_INITRD_VAR : self._initrd_val}) self._patch_bootcmdline({consts.BOOT_CMDLINE_TUNED_VAR : self._cmdline_val, consts.BOOT_CMDLINE_INITRD_ADD_VAR : self._initrd_val}) def _has_bls(self): return os.path.exists(consts.BLS_ENTRIES_PATH) def _update_grubenv(self, d): log.debug("updating grubenv, setting %s" % str(d)) l = ["%s=%s" % (str(option), str(value)) for option, value in d.items()] 
(rc, out) = self._cmd.execute(["grub2-editenv", "-", "set"] + l) if rc != 0: log.warn("cannot update grubenv: '%s'" % out) return False return True def _bls_entries_patch_initial(self): machine_id = self._cmd.get_machine_id() if machine_id == "": return False log.debug("running kernel update hook '%s' to patch BLS entries" % consts.KERNEL_UPDATE_HOOK_FILE) (rc, out) = self._cmd.execute([consts.KERNEL_UPDATE_HOOK_FILE, "add"], env = {"KERNEL_INSTALL_MACHINE_ID" : machine_id}) if rc != 0: log.warn("cannot patch BLS entries: '%s'" % out) return False return True def _bls_update(self): log.debug("updating BLS") if self._has_bls() and \ self._update_grubenv({"tuned_params" : self._cmdline_val, "tuned_initrd" : self._initrd_val}) and \ self._bls_entries_patch_initial(): return True return False def _init_initrd_dst_img(self, name): if self._initrd_dst_img_val is None: self._initrd_dst_img_val = os.path.join(consts.BOOT_DIR, os.path.basename(name)) def _check_petitboot(self): return os.path.isdir(consts.PETITBOOT_DETECT_DIR) def _install_initrd(self, img): if self._rpm_ostree: log.warn("Detected rpm-ostree which doesn't support initrd overlays.") return False if self._check_petitboot(): log.warn("Detected Petitboot which doesn't support initrd overlays. 
The initrd overlay will be ignored by bootloader.") log.info("installing initrd image as '%s'" % self._initrd_dst_img_val) img_name = os.path.basename(self._initrd_dst_img_val) if not self._cmd.copy(img, self._initrd_dst_img_val): return False self.update_grub2_cfg = True curr_cmdline = self._cmd.read_file("/proc/cmdline").rstrip() initrd_grubpath = "/" lc = len(curr_cmdline) if lc: path = re.sub(r"^\s*BOOT_IMAGE=\s*(?:\([^)]*\))?(\S*/).*$", "\\1", curr_cmdline) if len(path) < lc: initrd_grubpath = path self._initrd_val = os.path.join(initrd_grubpath, img_name) return True @command_custom("grub2_cfg_file") def _grub2_cfg_file(self, enabling, value, verify, ignore_missing): # nothing to verify if verify: return None if enabling and value is not None: self._grub2_cfg_file_names = [str(value)] @command_custom("initrd_dst_img") def _initrd_dst_img(self, enabling, value, verify, ignore_missing): # nothing to verify if verify: return None if enabling and value is not None: self._initrd_dst_img_val = str(value) if self._initrd_dst_img_val == "": return False if self._initrd_dst_img_val[0] != "/": self._initrd_dst_img_val = os.path.join(consts.BOOT_DIR, self._initrd_dst_img_val) @command_custom("initrd_remove_dir") def _initrd_remove_dir(self, enabling, value, verify, ignore_missing): # nothing to verify if verify: return None if enabling and value is not None: self._initrd_remove_dir = self._cmd.get_bool(value) == "1" @command_custom("initrd_add_img", per_device = False, priority = 10) def _initrd_add_img(self, enabling, value, verify, ignore_missing): # nothing to verify if verify: return None if enabling and value is not None: src_img = str(value) self._init_initrd_dst_img(src_img) if src_img == "": return False if not self._install_initrd(src_img): return False @command_custom("initrd_add_dir", per_device = False, priority = 10) def _initrd_add_dir(self, enabling, value, verify, ignore_missing): # nothing to verify if verify: return None if enabling and value is not 
None: src_dir = str(value) self._init_initrd_dst_img(src_dir) if src_dir == "": return False if not os.path.isdir(src_dir): log.error("error: cannot create initrd image, source directory '%s' doesn't exist" % src_dir) return False log.info("generating initrd image from directory '%s'" % src_dir) (fd, tmpfile) = tempfile.mkstemp(prefix = "tuned-bootloader-", suffix = ".tmp") log.debug("writing initrd image to temporary file '%s'" % tmpfile) os.close(fd) (rc, out) = self._cmd.execute("find . | cpio -co > %s" % tmpfile, cwd = src_dir, shell = True) log.debug("cpio log: %s" % out) if rc != 0: log.error("error generating initrd image") self._cmd.unlink(tmpfile, no_error = True) return False self._install_initrd(tmpfile) self._cmd.unlink(tmpfile) if self._initrd_remove_dir: log.info("removing directory '%s'" % src_dir) self._cmd.rmtree(src_dir) @command_custom("cmdline", per_device = False, priority = 10) def _cmdline(self, enabling, value, verify, ignore_missing): v = self._variables.expand(self._cmd.unquote(value)) if verify: if self._rpm_ostree: rpm_ostree_kargs = self._rpm_ostree_kargs()[0] cmdline = self._dict_to_options(rpm_ostree_kargs) else: cmdline = self._cmd.read_file("/proc/cmdline") if len(cmdline) == 0: return None cmdline_set = set(cmdline.split()) value_set = set(v.split()) missing_set = value_set - cmdline_set if len(missing_set) == 0: log.info(consts.STR_VERIFY_PROFILE_VALUE_OK % ("cmdline", str(value_set))) return True else: cmdline_dict = {v.split("=", 1)[0]: v for v in cmdline_set} for m in missing_set: arg = m.split("=", 1)[0] if not arg in cmdline_dict: log.error(consts.STR_VERIFY_PROFILE_CMDLINE_FAIL_MISSING % (arg, m)) else: log.error(consts.STR_VERIFY_PROFILE_CMDLINE_FAIL % (cmdline_dict[arg], m)) present_set = value_set & cmdline_set log.info("expected arguments that are present in cmdline: %s"%(" ".join(present_set),)) return False if enabling and value is not None: log.info("installing additional boot command line parameters to grub2") 
self.update_grub2_cfg = True self._cmdline_val = v @command_custom("skip_grub_config", per_device = False, priority = 10) def _skip_grub_config(self, enabling, value, verify, ignore_missing): if verify: return None if enabling and value is not None: if self._cmd.get_bool(value) == "1": log.info("skipping any modification of grub config") self._skip_grub_config_val = True def _instance_post_static(self, instance, enabling): if enabling and self._skip_grub_config_val: if len(self._initrd_val) > 0: log.warn("requested changes to initrd will not be applied!") if len(self._cmdline_val) > 0: log.warn("requested changes to cmdline will not be applied!") elif enabling and self.update_grub2_cfg: if self._rpm_ostree: self._rpm_ostree_update() else: self._grub2_update() self._bls_update() self.update_grub2_cfg = False 0707010000014A000081A40000000000000000000000016391BC3A00005265000000000000000000000000000000000000003800000000tuned-2.19.0.29+git.b894a3e/tuned/plugins/plugin_cpu.pyfrom . import base from .decorators import * import tuned.logs from tuned.utils.commands import commands import tuned.consts as consts import os import struct import errno import platform import procfs log = tuned.logs.get() cpuidle_states_path = "/sys/devices/system/cpu/cpu0/cpuidle" class CPULatencyPlugin(base.Plugin): """ `cpu`:: Sets the CPU governor to the value specified by the [option]`governor` option and dynamically changes the Power Management Quality of Service (PM QoS) CPU Direct Memory Access (DMA) latency according to the CPU load. `governor`::: The [option]`governor` option of the 'cpu' plug-in supports specifying CPU governors. Multiple governors are separated using '|'. The '|' character is meant to represent a logical 'or' operator. Note that the same syntax is used for the [option]`energy_perf_bias` option. *TuneD* will set the first governor that is available on the system. + For example, with the following profile, *TuneD* will set the 'ondemand' governor, if it is available. 
	If it is not available, but the 'powersave' governor is available,
	'powersave' will be set. If neither of them is available, the governor
	will not be changed.
	+
	.Specifying a CPU governor
	====
	----
	[cpu]
	governor=ondemand|powersave
	----
	====

	`sampling_down_factor`:::
	The sampling rate determines how frequently the governor checks to tune
	the CPU. The [option]`sampling_down_factor` is a tunable that multiplies
	the sampling rate when the CPU is at its highest clock frequency, thereby
	delaying load evaluation and improving performance. Allowed values for
	[option]`sampling_down_factor` are 1 to 100000.
	+
	.The recommended setting for jitter reduction
	====
	----
	[cpu]
	sampling_down_factor = 100
	----
	====

	`energy_perf_bias`:::
	[option]`energy_perf_bias` supports managing energy vs. performance
	policy via x86 Model Specific Registers using the
	`x86_energy_perf_policy` tool. Multiple alternative Energy Performance
	Bias (EPB) values are supported. The alternative values are separated
	using the '|' character. The following EPB values are supported starting
	with kernel 4.13: "performance", "balance-performance", "normal",
	"balance-power" and "power".
	+
	.Specifying alternative Energy Performance Bias values
	====
	----
	[cpu]
	energy_perf_bias=powersave|power
	----
	*TuneD* will try to set EPB to 'powersave'. If that fails, it will try
	to set it to 'power'.
	====

	`latency_low, latency_high, load_threshold`:::
	+
	If the CPU load is lower than the value specified by the
	[option]`load_threshold` option, the latency is set to the value
	specified by the [option]`latency_high` option, otherwise it is set to
	the value specified by the [option]`latency_low` option.
	+

	`force_latency`:::
	You can also force the latency to a specific value and prevent it from
	dynamically changing further. To do so, set the [option]`force_latency`
	option to the required latency value.
	+
	The maximum latency value can be specified in several ways:
	+
	* by a numerical value in microseconds (for example, `force_latency=10`)
	* as the kernel CPU idle level ID of the maximum C-state allowed
	  (for example, `force_latency=cstate.id:1`)
	* as a case-sensitive name of the maximum C-state allowed
	  (for example, `force_latency=cstate.name:C1`)
	* by using 'None' as a fallback value to prevent errors when alternative
	  C-state IDs/names do not exist. When 'None' is used in the alternatives
	  pipeline, all the alternatives that follow 'None' are ignored.
	+
	It is also possible to specify multiple fallback values separated by '|'
	as the C-state names and/or IDs may not be available on some systems.
	+
	.Specifying fallback C-state values
	====
	----
	[cpu]
	force_latency=cstate.name:C6|cstate.id:4|10
	----
	This configuration tries to obtain and set the latency of the C-state
	named C6. If the C-state C6 does not exist, the latency of kernel CPU
	idle level ID 4 (state4) is searched for in sysfs. Finally, if the
	state4 directory in sysfs is not found, the last fallback latency value
	of `10` us is used. The value is encoded and written into the kernel's
	PM QoS file `/dev/cpu_dma_latency`.
	====
	+
	.Specifying fallback C-state values using 'None'
	====
	----
	[cpu]
	force_latency=cstate.name:XYZ|None
	----
	In this case, if a C-state with the name `XYZ` does not exist, no
	latency value will be written into the kernel's PM QoS file, and no
	errors will be reported due to the presence of 'None'.
	====

	`min_perf_pct, max_perf_pct, no_turbo`:::
	These options set the internals of the Intel P-State driver exposed via
	the kernel's `sysfs` interface.
	+
	.Adjusting the configuration of the Intel P-State driver
	====
	----
	[cpu]
	min_perf_pct=100
	----
	Limits the minimum P-state that will be requested by the driver, stated
	as a percentage of the maximum (non-turbo) performance level.
	====
	"""

	def __init__(self, *args, **kwargs):
		super(CPULatencyPlugin, self).__init__(*args, **kwargs)

		self._has_pm_qos = True
		self._arch = "x86_64"
		self._is_x86 = False
		self._is_intel = False
		self._is_amd = False
		self._has_energy_perf_bias = False
		self._has_intel_pstate = False

		self._min_perf_pct_save = None
		self._max_perf_pct_save = None
		self._no_turbo_save = None
		self._governors_map = {}
		self._cmd = commands()

	def _init_devices(self):
		self._devices_supported = True
		self._free_devices = set()
		# current list of devices
		for device in self._hardware_inventory.get_devices("cpu"):
			self._free_devices.add(device.sys_name)
		self._assigned_devices = set()

	def _get_device_objects(self, devices):
		return [self._hardware_inventory.get_device("cpu", x) for x in devices]

	@classmethod
	def _get_config_options(self):
		return {
			"load_threshold"       : 0.2,
			"latency_low"          : 100,
			"latency_high"         : 1000,
			"force_latency"        : None,
			"governor"             : None,
			"sampling_down_factor" : None,
			"energy_perf_bias"     : None,
			"min_perf_pct"         : None,
			"max_perf_pct"         : None,
			"no_turbo"             : None,
		}

	def _check_arch(self):
		intel_archs = [ "x86_64", "i686", "i586", "i486", "i386" ]
		self._arch = platform.machine()

		if self._arch in intel_archs:
			# Possible other x86 vendors (from arch/x86/kernel/cpu/*):
			# "CentaurHauls", "CyrixInstead", "Geode by NSC", "HygonGenuine",
			# "GenuineTMx86", "TransmetaCPU", "UMC UMC UMC"
			cpu = procfs.cpuinfo()
			vendor = cpu.tags.get("vendor_id")
			if vendor == "GenuineIntel":
				self._is_intel = True
			elif vendor == "AuthenticAMD" or vendor == "HygonGenuine":
				self._is_amd = True
			else:
				# We always assign Intel, unless we know better
				self._is_intel = True
			log.info("We are running on an x86 %s platform" % vendor)
		else:
			log.info("We are running on %s (non x86)" % self._arch)

		if self._is_intel is True:
			# Check for x86_energy_perf_policy, ignore if not available / supported
			self._check_energy_perf_bias()
			# Check for intel_pstate
			self._check_intel_pstate()

	def _check_energy_perf_bias(self):
		self._has_energy_perf_bias = False
		retcode_unsupported = 1
		retcode, out = self._cmd.execute(["x86_energy_perf_policy", "-r"],
				no_errors = [errno.ENOENT, retcode_unsupported])
		# With recent versions of the tool, a zero exit code is
		# returned even if EPB is not supported. The output is empty
		# in that case, however.
		if retcode == 0 and out != "":
			self._has_energy_perf_bias = True
		elif retcode < 0:
			log.warning("unable to run x86_energy_perf_policy tool, ignoring CPU energy performance bias, is the tool installed?")
		else:
			log.warning("your CPU doesn't support MSR_IA32_ENERGY_PERF_BIAS, ignoring CPU energy performance bias")

	def _check_intel_pstate(self):
		self._has_intel_pstate = os.path.exists("/sys/devices/system/cpu/intel_pstate")
		if self._has_intel_pstate:
			log.info("intel_pstate detected")

	def _is_cpu_online(self, device):
		return self._cmd.is_cpu_online(str(device).replace("cpu", ""))

	def _cpu_has_scaling_governor(self, device):
		return os.path.exists("/sys/devices/system/cpu/%s/cpufreq/scaling_governor" % device)

	def _check_cpu_can_change_governor(self, device):
		if not self._is_cpu_online(device):
			log.debug("'%s' is not online, skipping" % device)
			return False
		if not self._cpu_has_scaling_governor(device):
			log.debug("there is no scaling governor for '%s', skipping" % device)
			return False
		return True

	def _instance_init(self, instance):
		instance._has_static_tuning = True
		instance._has_dynamic_tuning = False

		# only the first instance of the plugin can control the latency
		if list(self._instances.values())[0] == instance:
			instance._first_instance = True
			try:
				self._cpu_latency_fd = os.open(consts.PATH_CPU_DMA_LATENCY, os.O_WRONLY)
			except OSError:
				log.error("Unable to open '%s', disabling PM_QoS control" % consts.PATH_CPU_DMA_LATENCY)
				self._has_pm_qos = False
			self._latency = None

			if instance.options["force_latency"] is None:
				instance._load_monitor = self._monitors_repository.create("load", None)
				instance._has_dynamic_tuning = True
			else:
instance._load_monitor = None self._check_arch() else: instance._first_instance = False log.info("Latency settings from non-first CPU plugin instance '%s' will be ignored." % instance.name) try: instance._first_device = list(instance.assigned_devices)[0] except IndexError: instance._first_device = None def _instance_cleanup(self, instance): if instance._first_instance: if self._has_pm_qos: os.close(self._cpu_latency_fd) if instance._load_monitor is not None: self._monitors_repository.delete(instance._load_monitor) def _get_intel_pstate_attr(self, attr): return self._cmd.read_file("/sys/devices/system/cpu/intel_pstate/%s" % attr, None).strip() def _set_intel_pstate_attr(self, attr, val): if val is not None: self._cmd.write_to_file("/sys/devices/system/cpu/intel_pstate/%s" % attr, val) def _getset_intel_pstate_attr(self, attr, value): if value is None: return None v = self._get_intel_pstate_attr(attr) self._set_intel_pstate_attr(attr, value) return v def _instance_apply_static(self, instance): super(CPULatencyPlugin, self)._instance_apply_static(instance) if not instance._first_instance: return force_latency_value = self._variables.expand( instance.options["force_latency"]) if force_latency_value is not None: self._set_latency(force_latency_value) if self._has_intel_pstate: new_value = self._variables.expand( instance.options["min_perf_pct"]) self._min_perf_pct_save = self._getset_intel_pstate_attr( "min_perf_pct", new_value) new_value = self._variables.expand( instance.options["max_perf_pct"]) self._max_perf_pct_save = self._getset_intel_pstate_attr( "max_perf_pct", new_value) new_value = self._variables.expand( instance.options["no_turbo"]) self._no_turbo_save = self._getset_intel_pstate_attr( "no_turbo", new_value) def _instance_unapply_static(self, instance, full_rollback = False): super(CPULatencyPlugin, self)._instance_unapply_static(instance, full_rollback) if instance._first_instance and self._has_intel_pstate: self._set_intel_pstate_attr("min_perf_pct", 
self._min_perf_pct_save) self._set_intel_pstate_attr("max_perf_pct", self._max_perf_pct_save) self._set_intel_pstate_attr("no_turbo", self._no_turbo_save) def _instance_apply_dynamic(self, instance, device): self._instance_update_dynamic(instance, device) def _instance_update_dynamic(self, instance, device): assert(instance._first_instance) if device != instance._first_device: return load = instance._load_monitor.get_load()["system"] if load < instance.options["load_threshold"]: self._set_latency(instance.options["latency_high"]) else: self._set_latency(instance.options["latency_low"]) def _instance_unapply_dynamic(self, instance, device): pass def _str2int(self, s): try: return int(s) except (ValueError, TypeError): return None def _read_cstates_latency(self): self.cstates_latency = {} for d in os.listdir(cpuidle_states_path): cstate_path = cpuidle_states_path + "/%s/" % d name = self._cmd.read_file(cstate_path + "name", err_ret = None, no_error = True) latency = self._cmd.read_file(cstate_path + "latency", err_ret = None, no_error = True) if name is not None and latency is not None: latency = self._str2int(latency) if latency is not None: self.cstates_latency[name.strip()] = latency def _get_latency_by_cstate_name(self, name, no_zero=False): log.debug("getting latency for cstate with name '%s'" % name) if self.cstates_latency is None: log.debug("reading cstates latency table") self._read_cstates_latency() latency = self.cstates_latency.get(name, None) if no_zero and latency == 0: log.debug("skipping latency 0 as set by param") return None log.debug("cstate name mapped to latency: %s" % str(latency)) return latency def _get_latency_by_cstate_id(self, lid, no_zero=False): log.debug("getting latency for cstate with ID '%s'" % str(lid)) lid = self._str2int(lid) if lid is None: log.debug("cstate ID is invalid") return None latency_path = cpuidle_states_path + "/%s/latency" % ("state%d" % lid) latency = self._str2int(self._cmd.read_file(latency_path, err_ret = None, 
no_error = True)) if no_zero and latency == 0: log.debug("skipping latency 0 as set by param") return None log.debug("cstate ID mapped to latency: %s" % str(latency)) return latency # returns (latency, skip), skip means we want to skip latency settings def _parse_latency(self, latency): self.cstates_latency = None latencies = str(latency).split("|") log.debug("parsing latency") for latency in latencies: try: latency = int(latency) log.debug("parsed directly specified latency value: %d" % latency) except ValueError: if latency[0:18] == "cstate.id_no_zero:": latency = self._get_latency_by_cstate_id(latency[18:], no_zero=True) elif latency[0:10] == "cstate.id:": latency = self._get_latency_by_cstate_id(latency[10:]) elif latency[0:20] == "cstate.name_no_zero:": latency = self._get_latency_by_cstate_name(latency[20:], no_zero=True) elif latency[0:12] == "cstate.name:": latency = self._get_latency_by_cstate_name(latency[12:]) elif latency in ["none", "None"]: log.debug("latency 'none' specified") return None, True else: latency = None log.debug("invalid latency specified: '%s'" % str(latency)) if latency is not None: break return latency, False def _set_latency(self, latency): latency, skip = self._parse_latency(latency) if not skip and self._has_pm_qos: if latency is None: log.error("unable to evaluate latency value (probably wrong settings in the 'cpu' section of current profile), disabling PM QoS") self._has_pm_qos = False elif self._latency != latency: log.info("setting new cpu latency %d" % latency) latency_bin = struct.pack("i", latency) os.write(self._cpu_latency_fd, latency_bin) self._latency = latency def _get_available_governors(self, device): return self._cmd.read_file("/sys/devices/system/cpu/%s/cpufreq/scaling_available_governors" % device).strip().split() @command_set("governor", per_device=True) def _set_governor(self, governors, device, sim): if not self._check_cpu_can_change_governor(device): return None governors = str(governors) governors = 
governors.split("|") governors = [governor.strip() for governor in governors] for governor in governors: if len(governor) == 0: log.error("The 'governor' option contains an empty value.") return None available_governors = self._get_available_governors(device) for governor in governors: if governor in available_governors: if not sim: log.info("setting governor '%s' on cpu '%s'" % (governor, device)) self._cmd.write_to_file("/sys/devices/system/cpu/%s/cpufreq/scaling_governor" % device, governor) break elif not sim: log.debug("Ignoring governor '%s' on cpu '%s', it is not supported" % (governor, device)) else: log.warn("None of the scaling governors is supported: %s" % ", ".join(governors)) governor = None return governor @command_get("governor") def _get_governor(self, device, ignore_missing=False): governor = None if not self._check_cpu_can_change_governor(device): return None data = self._cmd.read_file("/sys/devices/system/cpu/%s/cpufreq/scaling_governor" % device, no_error=ignore_missing).strip() if len(data) > 0: governor = data if governor is None: log.error("could not get current governor on cpu '%s'" % device) return governor def _sampling_down_factor_path(self, governor = "ondemand"): return "/sys/devices/system/cpu/cpufreq/%s/sampling_down_factor" % governor @command_set("sampling_down_factor", per_device = True, priority = 10) def _set_sampling_down_factor(self, sampling_down_factor, device, sim): val = None # hack to clear governors map when the profile starts unloading # TODO: this should be handled better way, by e.g. 
currently non-implemented # Plugin.profile_load_finished() method if device in self._governors_map: self._governors_map.clear() self._governors_map[device] = None governor = self._get_governor(device) if governor is None: log.debug("ignoring sampling_down_factor setting for CPU '%s', cannot match governor" % device) return None if governor not in list(self._governors_map.values()): self._governors_map[device] = governor path = self._sampling_down_factor_path(governor) if not os.path.exists(path): log.debug("ignoring sampling_down_factor setting for CPU '%s', governor '%s' doesn't support it" % (device, governor)) return None val = str(sampling_down_factor) if not sim: log.info("setting sampling_down_factor to '%s' for governor '%s'" % (val, governor)) self._cmd.write_to_file(path, val) return val @command_get("sampling_down_factor") def _get_sampling_down_factor(self, device, ignore_missing=False): governor = self._get_governor(device, ignore_missing=ignore_missing) if governor is None: return None path = self._sampling_down_factor_path(governor) if not os.path.exists(path): return None return self._cmd.read_file(path).strip() def _try_set_energy_perf_bias(self, cpu_id, value): (retcode, out, err_msg) = self._cmd.execute( ["x86_energy_perf_policy", "-c", cpu_id, str(value) ], return_err = True) return (retcode, err_msg) @command_set("energy_perf_bias", per_device=True) def _set_energy_perf_bias(self, energy_perf_bias, device, sim): if not self._is_cpu_online(device): log.debug("%s is not online, skipping" % device) return None if self._has_energy_perf_bias: if not sim: cpu_id = device.lstrip("cpu") vals = energy_perf_bias.split('|') for val in vals: val = val.strip() log.debug("Trying to set energy_perf_bias to '%s' on cpu '%s'" % (val, device)) (retcode, err_msg) = self._try_set_energy_perf_bias( cpu_id, val) if retcode == 0: log.info("energy_perf_bias successfully set to '%s' on cpu '%s'" % (val, device)) break elif retcode < 0: log.error("Failed to set 
energy_perf_bias: %s" % err_msg)
						break
					else:
						log.debug("Could not set energy_perf_bias to '%s' on cpu '%s', trying another value" % (val, device))
				else:
					log.error("Failed to set energy_perf_bias on cpu '%s'. Is the value in the profile correct?" % device)
			return str(energy_perf_bias)
		else:
			return None

	def _try_parse_num(self, s):
		try:
			v = int(s)
		except ValueError:
			try:
				v = int(s, 16)
			except ValueError:
				v = s
		return v

	# Before Linux 4.13
	def _energy_perf_policy_to_human(self, s):
		return {0:"performance", 6:"normal", 15:"powersave"}.get(self._try_parse_num(s), s)

	# Since Linux 4.13
	def _energy_perf_policy_to_human_v2(self, s):
		return {0:"performance",
			4:"balance-performance",
			6:"normal",
			8:"balance-power",
			15:"power",
		}.get(self._try_parse_num(s), s)

	@command_get("energy_perf_bias")
	def _get_energy_perf_bias(self, device, ignore_missing=False):
		energy_perf_bias = None
		if not self._is_cpu_online(device):
			log.debug("%s is not online, skipping" % device)
			return None
		if self._has_energy_perf_bias:
			cpu_id = device.lstrip("cpu")
			retcode, lines = self._cmd.execute(["x86_energy_perf_policy", "-c", cpu_id, "-r"])
			if retcode == 0:
				for line in lines.splitlines():
					l = line.split()
					if len(l) == 2:
						energy_perf_bias = self._energy_perf_policy_to_human(l[1])
						break
					elif len(l) == 3:
						energy_perf_bias = self._energy_perf_policy_to_human_v2(l[2])
						break
		return energy_perf_bias

File tuned-2.19.0.29+git.b894a3e/tuned/plugins/plugin_disk.py:

import errno
from . import hotplug
from .decorators import *
import tuned.logs
import tuned.consts as consts
from tuned.utils.commands import commands
import os
import re

log = tuned.logs.get()

class DiskPlugin(hotplug.Plugin):
	"""
	`disk`::
	Plug-in for tuning various block device options. This plug-in can also
	dynamically change the advanced power management and spindown timeout
	setting for a drive according to the current drive utilization.
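The utilization-driven behavior described above is opt-in per profile. A minimal, hypothetical profile fragment (option names from this plug-in's `_get_config_options`) that pins it explicitly:

```ini
# Hypothetical tuned profile fragment: disable per-device dynamic tuning
# for the disk plug-in while keeping static options applied.
[disk]
dynamic=0
readahead=4096
```

Note that the global [option]`dynamic_tuning` switch in `tuned-main.conf` must also permit dynamic tuning for the per-plug-in option to take effect.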
	The dynamic tuning is controlled by the [option]`dynamic` and the global
	[option]`dynamic_tuning` option in `tuned-main.conf`.
	+
	The disk plug-in operates on all supported block devices unless a
	comma-separated list of [option]`devices` is passed to it.
	+
	.Operate only on the sda block device
	====
	----
	[disk]
	# Comma-separated list of devices, all devices if commented out.
	devices=sda
	----
	====
	+
	The [option]`elevator` option sets the Linux I/O scheduler.
	+
	.Use the bfq I/O scheduler on the xvda block device
	====
	----
	[disk]
	devices=xvda
	elevator=bfq
	----
	====
	+
	The [option]`scheduler_quantum` option only applies to the CFQ I/O
	scheduler. It defines the number of I/O requests that CFQ sends to one
	device at one time, essentially limiting queue depth. The default value
	is 8 requests. The device being used may support greater queue depth,
	but increasing the value of quantum will also increase latency,
	especially for large sequential write workloads.
	+
	The [option]`apm` option sets the Advanced Power Management feature on
	drives that support it. It corresponds to using the `-B` option of the
	`hdparm` utility. The [option]`spindown` option puts the drive into
	idle (low-power) mode, and also sets the standby (spindown) timeout for
	the drive. It corresponds to using the `-S` option of the `hdparm`
	utility.
	+
	.Use medium-aggressive power management with spindown
	====
	----
	[disk]
	apm=128
	spindown=6
	----
	====
	+
	The [option]`readahead` option controls how much extra data the
	operating system reads from disk when performing sequential I/O
	operations. Increasing the `readahead` value might improve performance
	in application environments where sequential reading of large files
	takes place. The default unit for readahead is KiB. This can be
	adjusted to sectors by specifying the suffix 's'. If the suffix is
	specified, there must be at least one space between the number and the
	suffix (for example, `readahead=8192 s`).
+ .Set the `readahead` to 4MB unless already set to a higher value ==== ---- [disk] readahead=>4096 ---- ==== The disk readahead value can be multiplied by the constant specified by the [option]`readahead_multiply` option. """ def __init__(self, *args, **kwargs): super(DiskPlugin, self).__init__(*args, **kwargs) self._power_levels = [254, 225, 195, 165, 145, 125, 105, 85, 70, 55, 30, 20] self._spindown_levels = [0, 250, 230, 210, 190, 170, 150, 130, 110, 90, 70, 60] self._levels = len(self._power_levels) self._level_steps = 6 self._load_smallest = 0.01 self._cmd = commands() def _init_devices(self): super(DiskPlugin, self)._init_devices() self._devices_supported = True self._use_hdparm = True self._free_devices = set() self._hdparm_apm_devices = set() for device in self._hardware_inventory.get_devices("block"): if self._device_is_supported(device): self._free_devices.add(device.sys_name) if self._use_hdparm and self._is_hdparm_apm_supported(device.sys_name): self._hdparm_apm_devices.add(device.sys_name) self._assigned_devices = set() def _get_device_objects(self, devices): return [self._hardware_inventory.get_device("block", x) for x in devices] def _is_hdparm_apm_supported(self, device): (rc, out, err_msg) = self._cmd.execute(["hdparm", "-C", "/dev/%s" % device], \ no_errors = [errno.ENOENT], return_err=True) if rc == -errno.ENOENT: log.warn("hdparm command not found, ignoring for other devices") self._use_hdparm = False return False elif rc: log.info("Device '%s' not supported by hdparm" % device) log.debug("(rc: %s, msg: '%s')" % (rc, err_msg)) return False elif "unknown" in out: log.info("Driver for device '%s' does not support apm command" % device) return False return True @classmethod def _device_is_supported(cls, device): return device.device_type == "disk" and \ device.attributes.get("removable", None) == b"0" and \ (device.parent is None or \ device.parent.subsystem in ["scsi", "virtio", "xen", "nvme"]) def _hardware_events_init(self): 
self._hardware_inventory.subscribe(self, "block", self._hardware_events_callback) def _hardware_events_cleanup(self): self._hardware_inventory.unsubscribe(self) def _hardware_events_callback(self, event, device): if self._device_is_supported(device) or event == "remove": super(DiskPlugin, self)._hardware_events_callback(event, device) def _added_device_apply_tuning(self, instance, device_name): if instance._load_monitor is not None: instance._load_monitor.add_device(device_name) super(DiskPlugin, self)._added_device_apply_tuning(instance, device_name) def _removed_device_unapply_tuning(self, instance, device_name): if instance._load_monitor is not None: instance._load_monitor.remove_device(device_name) super(DiskPlugin, self)._removed_device_unapply_tuning(instance, device_name) @classmethod def _get_config_options(cls): return { "dynamic" : True, # FIXME: do we want this default? "elevator" : None, "apm" : None, "spindown" : None, "readahead" : None, "readahead_multiply" : None, "scheduler_quantum" : None, } @classmethod def _get_config_options_used_by_dynamic(cls): return [ "apm", "spindown", ] def _instance_init(self, instance): instance._has_static_tuning = True self._apm_errcnt = 0 self._spindown_errcnt = 0 if self._option_bool(instance.options["dynamic"]): instance._has_dynamic_tuning = True instance._load_monitor = \ self._monitors_repository.create( "disk", instance.assigned_devices) instance._device_idle = {} instance._stats = {} instance._idle = {} instance._spindown_change_delayed = {} else: instance._has_dynamic_tuning = False instance._load_monitor = None def _instance_cleanup(self, instance): if instance._load_monitor is not None: self._monitors_repository.delete(instance._load_monitor) instance._load_monitor = None def _update_errcnt(self, rc, spindown): if spindown: s = "spindown" cnt = self._spindown_errcnt else: s = "apm" cnt = self._apm_errcnt if cnt >= consts.ERROR_THRESHOLD: return if rc == 0: cnt = 0 elif rc == -errno.ENOENT: 
self._spindown_errcnt = self._apm_errcnt = consts.ERROR_THRESHOLD + 1 log.warn("hdparm command not found, ignoring future set_apm / set_spindown commands") return else: cnt += 1 if cnt == consts.ERROR_THRESHOLD: log.info("disabling set_%s command: too many consecutive errors" % s) if spindown: self._spindown_errcnt = cnt else: self._apm_errcnt = cnt def _change_spindown(self, instance, device, new_spindown_level): log.debug("changing spindown to %d" % new_spindown_level) (rc, out) = self._cmd.execute(["hdparm", "-S%d" % new_spindown_level, "/dev/%s" % device], no_errors = [errno.ENOENT]) self._update_errcnt(rc, True) instance._spindown_change_delayed[device] = False def _drive_spinning(self, device): (rc, out) = self._cmd.execute(["hdparm", "-C", "/dev/%s" % device], no_errors = [errno.ENOENT]) return not "standby" in out and not "sleeping" in out def _instance_update_dynamic(self, instance, device): if not device in self._hdparm_apm_devices: return load = instance._load_monitor.get_device_load(device) if load is None: return if not device in instance._stats: self._init_stats_and_idle(instance, device) self._update_stats(instance, device, load) self._update_idle(instance, device) stats = instance._stats[device] idle = instance._idle[device] # level change decision if idle["level"] + 1 < self._levels and idle["read"] >= self._level_steps and idle["write"] >= self._level_steps: level_change = 1 elif idle["level"] > 0 and (idle["read"] == 0 or idle["write"] == 0): level_change = -1 else: level_change = 0 # change level if decided if level_change != 0: idle["level"] += level_change new_power_level = self._power_levels[idle["level"]] new_spindown_level = self._spindown_levels[idle["level"]] log.debug("tuning level changed to %d" % idle["level"]) if self._spindown_errcnt < consts.ERROR_THRESHOLD: if not self._drive_spinning(device) and level_change > 0: log.debug("delaying spindown change to %d, drive has already spun down" % new_spindown_level) 
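The level-change decision above (step one level toward power saving only after enough consecutive idle samples for both read and write, back off on any activity) can be exercised in isolation. A minimal sketch using the plug-in's defaults of 12 levels and 6 idle steps; the counter values are hypothetical:

```python
# Standalone sketch of the idle-level decision above. The plug-in keeps
# per-device idle counters; after enough consecutive idle samples it moves
# one level deeper into power saving, and backs off as soon as activity
# is seen.
LEVELS = 12       # len(self._power_levels)
LEVEL_STEPS = 6   # self._level_steps

def level_change(level, idle_read, idle_write):
    if level + 1 < LEVELS and idle_read >= LEVEL_STEPS and idle_write >= LEVEL_STEPS:
        return 1   # idle long enough: go one level deeper
    if level > 0 and (idle_read == 0 or idle_write == 0):
        return -1  # activity seen: back off one level
    return 0

print(level_change(0, 6, 7))   # 1
print(level_change(3, 0, 9))   # -1
print(level_change(0, 0, 0))   # 0
```

The resulting level then indexes the paired `_power_levels` / `_spindown_levels` tables from the constructor to pick the `hdparm -B` and `-S` values.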
instance._spindown_change_delayed[device] = True else: self._change_spindown(instance, device, new_spindown_level) if self._apm_errcnt < consts.ERROR_THRESHOLD: log.debug("changing APM_level to %d" % new_power_level) (rc, out) = self._cmd.execute(["hdparm", "-B%d" % new_power_level, "/dev/%s" % device], no_errors = [errno.ENOENT]) self._update_errcnt(rc, False) elif instance._spindown_change_delayed[device] and self._drive_spinning(device): new_spindown_level = self._spindown_levels[idle["level"]] self._change_spindown(instance, device, new_spindown_level) log.debug("%s load: read %0.2f, write %0.2f" % (device, stats["read"], stats["write"])) log.debug("%s idle: read %d, write %d, level %d" % (device, idle["read"], idle["write"], idle["level"])) def _init_stats_and_idle(self, instance, device): instance._stats[device] = { "new": 11 * [0], "old": 11 * [0], "max": 11 * [1] } instance._idle[device] = { "level": 0, "read": 0, "write": 0 } instance._spindown_change_delayed[device] = False def _update_stats(self, instance, device, new_load): instance._stats[device]["old"] = old_load = instance._stats[device]["new"] instance._stats[device]["new"] = new_load # load difference diff = [new_old[0] - new_old[1] for new_old in zip(new_load, old_load)] instance._stats[device]["diff"] = diff # adapt maximum expected load if the difference is higher old_max_load = instance._stats[device]["max"] max_load = [max(pair) for pair in zip(old_max_load, diff)] instance._stats[device]["max"] = max_load # read/write ratio instance._stats[device]["read"] = float(diff[1]) / float(max_load[1]) instance._stats[device]["write"] = float(diff[5]) / float(max_load[5]) def _update_idle(self, instance, device): # increase counter if there is no load, otherwise reset the counter for operation in ["read", "write"]: if instance._stats[device][operation] < self._load_smallest: instance._idle[device][operation] += 1 else: instance._idle[device][operation] = 0 def _instance_apply_dynamic(self, instance, 
			device):
		# At the moment we support dynamic tuning only for devices compatible
		# with hdparm APM commands. If new functionality not connected to
		# these commands is added in the future, this code will need to be
		# changed.
		if device not in self._hdparm_apm_devices:
			log.info("There is no dynamic tuning available for device '%s' at this time" % device)
		else:
			super(DiskPlugin, self)._instance_apply_dynamic(instance, device)

	def _instance_unapply_dynamic(self, instance, device):
		pass

	def _sysfs_path(self, device, suffix, prefix = "/sys/block/"):
		if "/" in device:
			dev = os.path.join(prefix, device.replace("/", "!"), suffix)
			if os.path.exists(dev):
				return dev
		return os.path.join(prefix, device, suffix)

	def _elevator_file(self, device):
		return self._sysfs_path(device, "queue/scheduler")

	@command_set("elevator", per_device=True)
	def _set_elevator(self, value, device, sim):
		sys_file = self._elevator_file(device)
		if not sim:
			self._cmd.write_to_file(sys_file, value)
		return value

	@command_get("elevator")
	def _get_elevator(self, device, ignore_missing=False):
		sys_file = self._elevator_file(device)
		# example of scheduler file content:
		# noop deadline [cfq]
		return self._cmd.get_active_option(self._cmd.read_file(sys_file, no_error=ignore_missing))

	@command_set("apm", per_device=True)
	def _set_apm(self, value, device, sim):
		if device not in self._hdparm_apm_devices:
			if not sim:
				log.info("apm option is not supported for device '%s'" % device)
				return None
			else:
				return str(value)
		if self._apm_errcnt < consts.ERROR_THRESHOLD:
			if not sim:
				(rc, out) = self._cmd.execute(["hdparm", "-B", str(value), "/dev/" + device],
						no_errors = [errno.ENOENT])
				self._update_errcnt(rc, False)
			return str(value)
		else:
			return None

	@command_get("apm")
	def _get_apm(self, device, ignore_missing=False):
		if device not in self._hdparm_apm_devices:
			if not ignore_missing:
				log.info("apm option is not supported for device '%s'" % device)
			return None
		value = None
		err = False
		(rc, out) = self._cmd.execute(["hdparm", "-B", "/dev/" +
device], no_errors = [errno.ENOENT]) if rc == -errno.ENOENT: return None elif rc != 0: err = True else: m = re.match(r".*=\s*(\d+).*", out, re.S) if m: try: value = int(m.group(1)) except ValueError: err = True if err: log.error("could not get current APM settings for device '%s'" % device) return value @command_set("spindown", per_device=True) def _set_spindown(self, value, device, sim): if device not in self._hdparm_apm_devices: if not sim: log.info("spindown option is not supported for device '%s'" % device) return None else: return str(value) if self._spindown_errcnt < consts.ERROR_THRESHOLD: if not sim: (rc, out) = self._cmd.execute(["hdparm", "-S", str(value), "/dev/" + device], no_errors = [errno.ENOENT]) self._update_errcnt(rc, True) return str(value) else: return None @command_get("spindown") def _get_spindown(self, device, ignore_missing=False): if device not in self._hdparm_apm_devices: if not ignore_missing: log.info("spindown option is not supported for device '%s'" % device) return None # There's no way how to get current/old spindown value, hardcoding vendor specific 253 return 253 def _readahead_file(self, device): return self._sysfs_path(device, "queue/read_ahead_kb") def _parse_ra(self, value): val = str(value).split(None, 1) try: v = int(val[0]) except ValueError: return None if len(val) > 1 and val[1][0] == "s": # v *= 512 / 1024 v /= 2 return v @command_set("readahead", per_device=True) def _set_readahead(self, value, device, sim): sys_file = self._readahead_file(device) val = self._parse_ra(value) if val is None: log.error("Invalid readahead value '%s' for device '%s'" % (value, device)) else: if not sim: self._cmd.write_to_file(sys_file, "%d" % val) return val @command_get("readahead") def _get_readahead(self, device, ignore_missing=False): sys_file = self._readahead_file(device) value = self._cmd.read_file(sys_file, no_error=ignore_missing).strip() if len(value) == 0: return None return int(value) @command_custom("readahead_multiply", 
per_device=True) def _multiply_readahead(self, enabling, multiplier, device, verify, ignore_missing): if verify: return None storage_key = self._storage_key( command_name = "readahead_multiply", device_name = device) if enabling: old_readahead = self._get_readahead(device) if old_readahead is None: return new_readahead = int(float(multiplier) * old_readahead) self._storage.set(storage_key, old_readahead) self._set_readahead(new_readahead, device, False) else: old_readahead = self._storage.get(storage_key) if old_readahead is None: return self._set_readahead(old_readahead, device, False) self._storage.unset(storage_key) def _scheduler_quantum_file(self, device): return self._sysfs_path(device, "queue/iosched/quantum") @command_set("scheduler_quantum", per_device=True) def _set_scheduler_quantum(self, value, device, sim): sys_file = self._scheduler_quantum_file(device) if not sim: self._cmd.write_to_file(sys_file, "%d" % int(value)) return value @command_get("scheduler_quantum") def _get_scheduler_quantum(self, device, ignore_missing=False): sys_file = self._scheduler_quantum_file(device) value = self._cmd.read_file(sys_file, no_error=ignore_missing).strip() if len(value) == 0: if not ignore_missing: log.info("disk_scheduler_quantum option is not supported for device '%s'" % device) return None return int(value) 0707010000014C000081A40000000000000000000000016391BC3A00000B83000000000000000000000000000000000000003E00000000tuned-2.19.0.29+git.b894a3e/tuned/plugins/plugin_eeepc_she.pyfrom . import base from . import exceptions import tuned.logs from tuned.utils.commands import commands import os log = tuned.logs.get() class EeePCSHEPlugin(base.Plugin): """ `eeepc_she`:: Dynamically sets the front-side bus (FSB) speed according to the CPU load. This feature can be found on some netbooks and is also known as the Asus Super Hybrid Engine. 
	If the CPU load is lower than or equal to the value specified by the
	[option]`load_threshold_powersave` option, the plug-in sets the FSB
	speed to the value specified by the [option]`she_powersave` option.
	If the CPU load is higher than or equal to the value specified by the
	[option]`load_threshold_normal` option, it sets the FSB speed to the
	value specified by the [option]`she_normal` option. Static tuning is
	not supported and the plug-in is transparently disabled if the
	hardware support for this feature is not detected.

	NOTE: For details about the FSB frequencies and corresponding values, see
	link:https://www.kernel.org/doc/Documentation/ABI/testing/sysfs-platform-eeepc-laptop[the kernel documentation].
	The provided defaults should work for most users.
	"""

	def __init__(self, *args, **kwargs):
		self._cmd = commands()
		self._control_file = "/sys/devices/platform/eeepc/cpufv"
		if not os.path.isfile(self._control_file):
			self._control_file = "/sys/devices/platform/eeepc-wmi/cpufv"
		if not os.path.isfile(self._control_file):
			raise exceptions.NotSupportedPluginException("Plugin is not supported on your hardware.")
		super(EeePCSHEPlugin, self).__init__(*args, **kwargs)

	@classmethod
	def _get_config_options(self):
		return {
			"load_threshold_normal"    : 0.6,
			"load_threshold_powersave" : 0.4,
			"she_powersave"            : 2,
			"she_normal"               : 1,
		}

	def _instance_init(self, instance):
		instance._has_static_tuning = False
		instance._has_dynamic_tuning = True
		instance._she_mode = None
		instance._load_monitor = self._monitors_repository.create("load", None)

	def _instance_cleanup(self, instance):
		if instance._load_monitor is not None:
			self._monitors_repository.delete(instance._load_monitor)
			instance._load_monitor = None

	def _instance_update_dynamic(self, instance, device):
		load = instance._load_monitor.get_load()["system"]
		if load <= instance.options["load_threshold_powersave"]:
			self._set_she_mode(instance, "powersave")
		elif load >= instance.options["load_threshold_normal"]:
			self._set_she_mode(instance, "normal")

	def _instance_unapply_dynamic(self, instance, device):
		# FIXME: restore previous value
		self._set_she_mode(instance, "normal")

	def _set_she_mode(self, instance, new_mode):
		new_mode_numeric = int(instance.options["she_%s" % new_mode])
		if instance._she_mode != new_mode_numeric:
			log.info("new eeepc_she mode %s (%d) " % (new_mode, new_mode_numeric))
			self._cmd.write_to_file(self._control_file, "%s" % new_mode_numeric)
			instance._she_mode = new_mode_numeric
0707010000014D000081A40000000000000000000000016391BC3A00000DE2000000000000000000000000000000000000003F00000000tuned-2.19.0.29+git.b894a3e/tuned/plugins/plugin_irqbalance.py
from . import base
from .decorators import command_custom
from tuned import consts
import tuned.logs
import errno
import perf
import re

log = tuned.logs.get()

class IrqbalancePlugin(base.Plugin):
	"""
	`irqbalance`::

	Plug-in for irqbalance settings management. The plug-in configures
	CPUs which should be skipped when rebalancing IRQs in
	`/etc/sysconfig/irqbalance`. It then restarts irqbalance if and
	only if it was previously running.
	+
	The banned/skipped CPUs are specified as a CPU list via the
	[option]`banned_cpus` option.
	+
	.Skip CPUs 2,4 and 9-13 when rebalancing IRQs
	====
	----
	[irqbalance]
	banned_cpus=2,4,9-13
	----
	====
	"""

	def __init__(self, *args, **kwargs):
		super(IrqbalancePlugin, self).__init__(*args, **kwargs)
		self._cpus = perf.cpu_map()

	def _instance_init(self, instance):
		instance._has_dynamic_tuning = False
		instance._has_static_tuning = True

	def _instance_cleanup(self, instance):
		pass

	@classmethod
	def _get_config_options(cls):
		return {
			"banned_cpus": None,
		}

	def _read_irqbalance_sysconfig(self):
		try:
			with open(consts.IRQBALANCE_SYSCONFIG_FILE, "r") as f:
				return f.read()
		except IOError as e:
			if e.errno == errno.ENOENT:
				log.warn("irqbalance sysconfig file is missing. Is irqbalance installed?")
			else:
				log.error("Failed to read irqbalance sysconfig file: %s" % e)
			return None

	def _write_irqbalance_sysconfig(self, content):
		try:
			with open(consts.IRQBALANCE_SYSCONFIG_FILE, "w") as f:
				f.write(content)
			return True
		except IOError as e:
			log.error("Failed to write irqbalance sysconfig file: %s" % e)
			return False

	def _write_banned_cpus(self, sysconfig, banned_cpumask):
		return sysconfig + "IRQBALANCE_BANNED_CPUS=%s\n" % banned_cpumask

	def _clear_banned_cpus(self, sysconfig):
		lines = []
		for line in sysconfig.split("\n"):
			if not re.match(r"\s*IRQBALANCE_BANNED_CPUS=", line):
				lines.append(line)
		return "\n".join(lines)

	def _restart_irqbalance(self):
		# Exit code 5 means unit not found (see 'EXIT_NOTINSTALLED' in
		# systemd.exec(5))
		retcode, out = self._cmd.execute(
			["systemctl", "try-restart", "irqbalance"],
			no_errors=[5])
		if retcode != 0:
			log.warn("Failed to restart irqbalance. Is it installed?")

	def _set_banned_cpus(self, banned_cpumask):
		content = self._read_irqbalance_sysconfig()
		if content is None:
			return
		content = self._clear_banned_cpus(content)
		content = self._write_banned_cpus(content, banned_cpumask)
		if self._write_irqbalance_sysconfig(content):
			self._restart_irqbalance()

	def _restore_banned_cpus(self):
		content = self._read_irqbalance_sysconfig()
		if content is None:
			return
		content = self._clear_banned_cpus(content)
		if self._write_irqbalance_sysconfig(content):
			self._restart_irqbalance()

	@command_custom("banned_cpus", per_device=False)
	def _banned_cpus(self, enabling, value, verify, ignore_missing):
		banned_cpumask = None
		if value is not None:
			banned = set(self._cmd.cpulist_unpack(value))
			present = set(self._cpus)
			if banned.issubset(present):
				banned_cpumask = self._cmd.cpulist2hex(list(banned))
			else:
				str_cpus = ",".join([str(x) for x in self._cpus])
				log.error("Invalid banned_cpus specified, '%s' does not match available cores '%s'" % (value, str_cpus))
		if (enabling or verify) and banned_cpumask is None:
			return None
		if verify:
			# Verification is currently not supported
			return None
		elif enabling:
			self._set_banned_cpus(banned_cpumask)
		else:
			self._restore_banned_cpus()
0707010000014E000081A40000000000000000000000016391BC3A00001320000000000000000000000000000000000000003C00000000tuned-2.19.0.29+git.b894a3e/tuned/plugins/plugin_modules.py
import re
import os.path

from . import base
from .decorators import *
import tuned.logs
from subprocess import *
from tuned.utils.commands import commands
import tuned.consts as consts

log = tuned.logs.get()

class ModulesPlugin(base.Plugin):
	"""
	`modules`::

	Plug-in for applying custom kernel modules options.
	+
	This plug-in can set parameters to kernel modules. It creates the
	`/etc/modprobe.d/tuned.conf` file. The syntax is
	`_module_=_option1=value1 option2=value2..._` where `_module_` is
	the module name and `_optionx=valuex_` are module options which may
	or may not be present.
	+
	.Load module `netrom` with module parameter `nr_ndevs=2`
	====
	----
	[modules]
	netrom=nr_ndevs=2
	----
	====

	Modules can also be forced to load/reload by using an additional
	`+r` option prefix.
	+
	.(Re)load module `netrom` with module parameter `nr_ndevs=2`
	====
	----
	[modules]
	netrom=+r nr_ndevs=2
	----
	====

	The `+r` switch will also cause *TuneD* to try to remove the
	`netrom` module (if loaded) and try to (re)insert it with the
	specified parameters. The `+r` can be followed by an optional
	comma (`+r,`) for better readability.
	+
	When using `+r`, the module will be loaded immediately by the
	*TuneD* daemon itself rather than waiting for the OS to load it
	with the specified parameters.
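	+
	Multiple modules can be listed in one `[modules]` section, one per
	line. The following snippet is only a hypothetical illustration of
	the documented syntax (the `loop` module and its `max_loop`
	parameter are just example names, not part of this plug-in):
	+
	.Set options for two modules, reloading only `netrom`
	====
	----
	[modules]
	netrom=+r,nr_ndevs=2
	loop=max_loop=64
	----
	====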
""" def __init__(self, *args, **kwargs): super(ModulesPlugin, self).__init__(*args, **kwargs) self._has_dynamic_options = True self._cmd = commands() def _instance_init(self, instance): instance._has_dynamic_tuning = False instance._has_static_tuning = True instance._modules = instance.options def _instance_cleanup(self, instance): pass def _reload_modules(self, modules): for module in modules: retcode, out = self._cmd.execute(["modprobe", "-r", module]) if retcode < 0: log.warn("'modprobe' command not found, cannot reload kernel modules, reboot is required") return elif retcode > 0: log.debug("cannot remove kernel module '%s': %s" % (module, out.strip())) retcode, out = self._cmd.execute(["modprobe", module]) if retcode != 0: log.warn("cannot insert/reinsert module '%s', reboot is required: %s" % (module, out.strip())) def _instance_apply_static(self, instance): self._clear_modprobe_file() s = "" retcode = 0 skip_check = False reload_list = [] for option, value in list(instance._modules.items()): module = self._variables.expand(option) v = self._variables.expand(value) if not skip_check: retcode, out = self._cmd.execute(["modinfo", module]) if retcode < 0: skip_check = True log.warn("'modinfo' command not found, not checking kernel modules") elif retcode > 0: log.error("kernel module '%s' not found, skipping it" % module) if skip_check or retcode == 0: if len(v) > 1 and v[0:2] == "+r": v = re.sub(r"^\s*\+r\s*,?\s*", "", v) reload_list.append(module) if len(v) > 0: s += "options " + module + " " + v + "\n" else: log.debug("module '%s' doesn't have any option specified, not writing it to modprobe.d" % module) self._cmd.write_to_file(consts.MODULES_FILE, s) l = len(reload_list) if l > 0: self._reload_modules(reload_list) if len(instance._modules) != l: log.info(consts.STR_HINT_REBOOT) def _unquote_path(self, path): return str(path).replace("/", "") def _instance_verify_static(self, instance, ignore_missing, devices): ret = True # not all modules exports all their 
parameteters through sysfs, so hardcode check with ignore_missing ignore_missing = True r = re.compile(r"\s+") for option, value in list(instance._modules.items()): module = self._variables.expand(option) v = self._variables.expand(value) v = re.sub(r"^\s*\+r\s*,?\s*", "", v) mpath = "/sys/module/%s" % module if not os.path.exists(mpath): ret = False log.error(consts.STR_VERIFY_PROFILE_FAIL % "module '%s' is not loaded" % module) else: log.info(consts.STR_VERIFY_PROFILE_OK % "module '%s' is loaded" % module) l = r.split(v) for item in l: arg = item.split("=") if len(arg) != 2: log.warn("unrecognized module option for module '%s': %s" % (module, item)) else: if self._verify_value(arg[0], arg[1], self._cmd.read_file(mpath + "/parameters/" + self._unquote_path(arg[0]), err_ret = None, no_error = True), ignore_missing) == False: ret = False return ret def _instance_unapply_static(self, instance, full_rollback = False): if full_rollback: self._clear_modprobe_file() def _clear_modprobe_file(self): s = self._cmd.read_file(consts.MODULES_FILE, no_error = True) l = s.split("\n") i = j = 0 ll = len(l) r = re.compile(r"^\s*#") while i < ll: if r.search(l[i]) is None: j = i i = ll i += 1 s = "\n".join(l[0:j]) if len(s) > 0: s += "\n" self._cmd.write_to_file(consts.MODULES_FILE, s) 0707010000014F000081A40000000000000000000000016391BC3A000015CC000000000000000000000000000000000000003B00000000tuned-2.19.0.29+git.b894a3e/tuned/plugins/plugin_mounts.pyimport tuned.consts as consts from . import base from .decorators import * from subprocess import Popen,PIPE import tuned.logs from tuned.utils.commands import commands import glob log = tuned.logs.get() cmd = commands() class MountsPlugin(base.Plugin): """ `mounts`:: Enables or disables barriers for mounts according to the value of the [option]`disable_barriers` option. The [option]`disable_barriers` option has an optional value `force` which disables barriers even on mountpoints with write back caches. 
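	+
	For example (a hypothetical profile snippet, following the same
	option syntax as the other plug-ins):
	+
	.Disable barriers even on devices with a write back cache
	====
	----
	[mounts]
	disable_barriers=force
	----
	====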
	Note that only extended file systems (ext) are supported by this
	plug-in.
	"""

	@classmethod
	def _generate_mountpoint_topology(cls):
		"""
		Gets the information about disks, partitions and mountpoints.
		Stores information about the used filesystem and creates a list
		of all underlying devices (in case of LVM) for each mountpoint.
		"""
		mountpoint_topology = {}
		current_disk = None

		stdout, stderr = Popen(["lsblk", "-rno", \
				"TYPE,RM,KNAME,FSTYPE,MOUNTPOINT"], \
				stdout=PIPE, stderr=PIPE, close_fds=True, \
				universal_newlines = True).communicate()
		for columns in [line.split() for line in stdout.splitlines()]:
			if len(columns) < 3:
				continue
			device_type, device_removable, device_name = columns[:3]
			filesystem = columns[3] if len(columns) > 3 else None
			mountpoint = columns[4] if len(columns) > 4 else None

			if device_type == "disk":
				current_disk = device_name
				continue

			# skip removable, skip nonpartitions
			if device_removable == "1" or device_type not in ["part", "lvm"]:
				continue

			if mountpoint is None or mountpoint == "[SWAP]":
				continue

			mountpoint_topology.setdefault(mountpoint,
					{"disks": set(), "device_name": device_name, "filesystem": filesystem})
			mountpoint_topology[mountpoint]["disks"].add(current_disk)

		cls._mountpoint_topology = mountpoint_topology

	def _init_devices(self):
		self._generate_mountpoint_topology()
		self._devices_supported = True
		self._free_devices = set(self._mountpoint_topology.keys())
		self._assigned_devices = set()

	@classmethod
	def _get_config_options(self):
		return {
			"disable_barriers": None,
		}

	def _instance_init(self, instance):
		instance._has_dynamic_tuning = False
		instance._has_static_tuning = True

	def _instance_cleanup(self, instance):
		pass

	def _get_device_cache_type(self, device):
		"""
		Get device cache type. This will work only for devices on the
		SCSI kernel subsystem.
		"""
		source_filenames = glob.glob("/sys/block/%s/device/scsi_disk/*/cache_type" % device)
		for source_filename in source_filenames:
			return cmd.read_file(source_filename).strip()
		return None

	def _mountpoint_has_writeback_cache(self, mountpoint):
		"""
		Checks if the device has a 'write back' cache. If the cache
		type cannot be determined, assume some other cache.
		"""
		for device in self._mountpoint_topology[mountpoint]["disks"]:
			if self._get_device_cache_type(device) == "write back":
				return True
		return False

	def _mountpoint_has_barriers(self, mountpoint):
		"""
		Checks if a given mountpoint is mounted with barriers enabled
		or disabled.
		"""
		with open("/proc/mounts") as mounts_file:
			for line in mounts_file:
				# device mountpoint filesystem options dump check
				columns = line.split()
				if columns[0][0] != "/":
					continue
				if columns[1] == mountpoint:
					option_list = columns[3]
					break
			else:
				return None

		options = option_list.split(",")
		for option in options:
			(name, sep, value) = option.partition("=")
			# nobarrier barrier=0
			if name == "nobarrier" or (name == "barrier" and value == "0"):
				return False
			# barrier barrier=1
			elif name == "barrier":
				return True
		else:
			# default
			return True

	def _remount_partition(self, partition, options):
		"""
		Remounts the partition.
""" remount_command = ["/usr/bin/mount", partition, "-o", "remount,%s" % options] cmd.execute(remount_command) @command_custom("disable_barriers", per_device=True) def _disable_barriers(self, start, value, mountpoint, verify, ignore_missing): storage_key = self._storage_key( command_name = "disable_barriers", device_name = mountpoint) force = str(value).lower() == "force" value = force or self._option_bool(value) if start: if not value: return None reject_reason = None if not self._mountpoint_topology[mountpoint]["filesystem"].startswith("ext"): reject_reason = "filesystem not supported" elif not force and self._mountpoint_has_writeback_cache(mountpoint): reject_reason = "device uses write back cache" else: original_value = self._mountpoint_has_barriers(mountpoint) if original_value is None: reject_reason = "unknown current setting" elif original_value == False: if verify: log.info(consts.STR_VERIFY_PROFILE_OK % mountpoint) return True else: reject_reason = "barriers already disabled" elif verify: log.error(consts.STR_VERIFY_PROFILE_FAIL % mountpoint) return False if reject_reason is not None: log.info("not disabling barriers on '%s' (%s)" % (mountpoint, reject_reason)) return None self._storage.set(storage_key, original_value) log.info("disabling barriers on '%s'" % mountpoint) self._remount_partition(mountpoint, "barrier=0") else: if verify: return None original_value = self._storage.get(storage_key) if original_value is None: return None log.info("enabling barriers on '%s'" % mountpoint) self._remount_partition(mountpoint, "barrier=1") self._storage.unset(storage_key) return None 07070100000150000081A40000000000000000000000016391BC3A00005A31000000000000000000000000000000000000003800000000tuned-2.19.0.29+git.b894a3e/tuned/plugins/plugin_net.pyimport errno from . 
import base from .decorators import * import tuned.logs from tuned.utils.nettool import ethcard from tuned.utils.commands import commands import os import re log = tuned.logs.get() WOL_VALUES = "pumbagsd" class NetTuningPlugin(base.Plugin): """ `net`:: Configures network driver, hardware and Netfilter settings. Dynamic change of the interface speed according to the interface utilization is also supported. The dynamic tuning is controlled by the [option]`dynamic` and the global [option]`dynamic_tuning` option in `tuned-main.conf`. + The [option]`wake_on_lan` option sets wake-on-lan to the specified value as when using the `ethtool` utility. + .Set Wake-on-LAN for device eth0 on MagicPacket(TM) ==== ---- [net] devices=eth0 wake_on_lan=g ---- ==== + The [option]`coalesce` option allows changing coalescing settings for the specified network devices. The syntax is: + [subs="+quotes,+macros"] ---- coalesce=__param1__ __value1__ __param2__ __value2__ ... __paramN__ __valueN__ ---- Note that not all the coalescing parameters are supported by all network cards. For the list of coalescing parameters of your network device, use `ethtool -c device`. + .Setting coalescing parameters rx/tx-usecs for all network devices ==== ---- [net] coalesce=rx-usecs 3 tx-usecs 16 ---- ==== + The [option]`features` option allows changing the offload parameters and other features for the specified network devices. To query the features of your network device, use `ethtool -k device`. The syntax of the option is the same as the [option]`coalesce` option. + .Turn off TX checksumming, generic segmentation and receive offload ==== ---- [net] features=tx off gso off gro off ---- ==== The [option]`pause` option allows changing the pause parameters for the specified network devices. To query the pause parameters of your network device, use `ethtool -a device`. The syntax of the option is the same as the [option]`coalesce` option. 
+ .Disable autonegotiation ==== ---- [net] pause=autoneg off ---- ==== + The [option]`ring` option allows changing the rx/tx ring parameters for the specified network devices. To query the ring parameters of your network device, use `ethtool -g device`. The syntax of the option is the same as the [option]`coalesce` option. + .Change the number of ring entries for the Rx/Tx rings to 1024/512 respectively ===== ----- [net] ring=rx 1024 tx 512 ----- ===== + The [option]`channels` option allows changing the numbers of channels for the specified network device. A channel is an IRQ and the set of queues that can trigger that IRQ. To query the channels parameters of your network device, use `ethtool -l device`. The syntax of the option is the same as the [option]`coalesce` option. + .Set the number of multi-purpose channels to 16 ===== ----- [net] channels=combined 16 ----- ===== + A network device either supports rx/tx or combined queue mode. The [option]`channels` option automatically adjusts the parameters based on the mode supported by the device as long as a valid configuration is requested. + The [option]`nf_conntrack_hashsize` option sets the size of the hash table which stores lists of conntrack entries by writing to `/sys/module/nf_conntrack/parameters/hashsize`. + .Adjust the size of the conntrack hash table ==== ---- [net] nf_conntrack_hashsize=131072 ---- ==== + The [option]`txqueuelen` option allows changing txqueuelen (the length of the transmit queue). It uses `ip` utility that is in package iproute recommended for TuneD, so the package needs to be installed for its correct functionality. To query the txqueuelen parameters of your network device use `ip link show` and the current value is shown after the qlen column. + .Adjust the length of the transmit queue ==== ---- [net] txqueuelen=5000 ---- ==== + The [option]`mtu` option allows changing MTU (Maximum Transmission Unit). 
It uses `ip` utility that is in package iproute recommended for TuneD, so the package needs to be installed for its correct functionality. To query the MTU parameters of your network device use `ip link show` and the current value is shown after the MTU column. + .Adjust the size of the MTU ==== ---- [net] mtu=9000 ---- ==== """ def __init__(self, *args, **kwargs): super(NetTuningPlugin, self).__init__(*args, **kwargs) self._load_smallest = 0.05 self._level_steps = 6 self._cmd = commands() self._re_ip_link_show = {} self._use_ip = True def _init_devices(self): self._devices_supported = True self._free_devices = set() self._assigned_devices = set() re_not_virtual = re.compile('(?!.*/virtual/.*)') for device in self._hardware_inventory.get_devices("net"): if re_not_virtual.match(device.device_path): self._free_devices.add(device.sys_name) log.debug("devices: %s" % str(self._free_devices)); def _get_device_objects(self, devices): return [self._hardware_inventory.get_device("net", x) for x in devices] def _instance_init(self, instance): instance._has_static_tuning = True if self._option_bool(instance.options["dynamic"]): instance._has_dynamic_tuning = True instance._load_monitor = self._monitors_repository.create("net", instance.assigned_devices) instance._idle = {} instance._stats = {} else: instance._has_dynamic_tuning = False instance._load_monitor = None instance._idle = None instance._stats = None def _instance_cleanup(self, instance): if instance._load_monitor is not None: self._monitors_repository.delete(instance._load_monitor) instance._load_monitor = None def _instance_apply_dynamic(self, instance, device): self._instance_update_dynamic(instance, device) def _instance_update_dynamic(self, instance, device): load = [int(value) for value in instance._load_monitor.get_device_load(device)] if load is None: return if not device in instance._stats: self._init_stats_and_idle(instance, device) self._update_stats(instance, device, load) self._update_idle(instance, 
device) stats = instance._stats[device] idle = instance._idle[device] if idle["level"] == 0 and idle["read"] >= self._level_steps and idle["write"] >= self._level_steps: idle["level"] = 1 log.info("%s: setting 100Mbps" % device) ethcard(device).set_speed(100) elif idle["level"] == 1 and (idle["read"] == 0 or idle["write"] == 0): idle["level"] = 0 log.info("%s: setting max speed" % device) ethcard(device).set_max_speed() log.debug("%s load: read %0.2f, write %0.2f" % (device, stats["read"], stats["write"])) log.debug("%s idle: read %d, write %d, level %d" % (device, idle["read"], idle["write"], idle["level"])) @classmethod def _get_config_options_coalesce(cls): return { "adaptive-rx": None, "adaptive-tx": None, "rx-usecs": None, "rx-frames": None, "rx-usecs-irq": None, "rx-frames-irq": None, "tx-usecs": None, "tx-frames": None, "tx-usecs-irq": None, "tx-frames-irq": None, "stats-block-usecs": None, "pkt-rate-low": None, "rx-usecs-low": None, "rx-frames-low": None, "tx-usecs-low": None, "tx-frames-low": None, "pkt-rate-high": None, "rx-usecs-high": None, "rx-frames-high": None, "tx-usecs-high": None, "tx-frames-high": None, "sample-interval": None } @classmethod def _get_config_options_pause(cls): return { "autoneg": None, "rx": None, "tx": None } @classmethod def _get_config_options_ring(cls): return { "rx": None, "rx-mini": None, "rx-jumbo": None, "tx": None } @classmethod def _get_config_options_channels(cls): return { "rx": None, "tx": None, "other": None, "combined": None } @classmethod def _get_config_options(cls): return { "dynamic": True, "wake_on_lan": None, "nf_conntrack_hashsize": None, "features": None, "coalesce": None, "pause": None, "ring": None, "channels": None, "txqueuelen": None, "mtu": None, } def _init_stats_and_idle(self, instance, device): max_speed = self._calc_speed(ethcard(device).get_max_speed()) instance._stats[device] = { "new": 4 * [0], "max": 2 * [max_speed, 1] } instance._idle[device] = { "level": 0, "read": 0, "write": 0 } def 
_update_stats(self, instance, device, new_load): # put new to old instance._stats[device]["old"] = old_load = instance._stats[device]["new"] instance._stats[device]["new"] = new_load # load difference diff = [new_old[0] - new_old[1] for new_old in zip(new_load, old_load)] instance._stats[device]["diff"] = diff # adapt maximum expected load if the difference is higer old_max_load = instance._stats[device]["max"] max_load = [max(pair) for pair in zip(old_max_load, diff)] instance._stats[device]["max"] = max_load # read/write ratio instance._stats[device]["read"] = float(diff[0]) / float(max_load[0]) instance._stats[device]["write"] = float(diff[2]) / float(max_load[2]) def _update_idle(self, instance, device): # increase counter if there is no load, otherwise reset the counter for operation in ["read", "write"]: if instance._stats[device][operation] < self._load_smallest: instance._idle[device][operation] += 1 else: instance._idle[device][operation] = 0 def _instance_unapply_dynamic(self, instance, device): if device in instance._idle and instance._idle[device]["level"] > 0: instance._idle[device]["level"] = 0 log.info("%s: setting max speed" % device) ethcard(device).set_max_speed() def _calc_speed(self, speed): # 0.6 is just a magical constant (empirical value): Typical workload on netcard won't exceed # that and if it does, then the code is smart enough to adapt it. 
# 1024 * 1024 as for MB -> B # speed / 7 Mb -> MB return (int) (0.6 * 1024 * 1024 * speed / 8) # parse features/coalesce config parameters (those defined in profile configuration) # context is for error message def _parse_config_parameters(self, value, context): # split supporting various dellimeters v = str(re.sub(r"(:\s*)|(\s+)|(\s*;\s*)|(\s*,\s*)", " ", value)).split() lv = len(v) if lv % 2 != 0: log.error("invalid %s parameter: '%s'" % (context, str(value))) return None if lv == 0: return dict() # convert flat list to dict return dict(list(zip(v[::2], v[1::2]))) # parse features/coalesce device parameters (those returned by ethtool) def _parse_device_parameters(self, value): # substitute "Adaptive RX: val1 TX: val2" to 'adaptive-rx: val1' and # 'adaptive-tx: val2' and workaround for ethtool inconsistencies # (rhbz#1225375) value = self._cmd.multiple_re_replace({ "Adaptive RX:": "adaptive-rx:", "\s+TX:": "\nadaptive-tx:", "rx-frame-low:": "rx-frames-low:", "rx-frame-high:": "rx-frames-high:", "tx-frame-low:": "tx-frames-low:", "tx-frame-high:": "tx-frames-high:", "large-receive-offload:": "lro:", "rx-checksumming:": "rx:", "tx-checksumming:": "tx:", "scatter-gather:": "sg:", "tcp-segmentation-offload:": "tso:", "udp-fragmentation-offload:": "ufo:", "generic-segmentation-offload:": "gso:", "generic-receive-offload:": "gro:", "rx-vlan-offload:": "rxvlan:", "tx-vlan-offload:": "txvlan:", "ntuple-filters:": "ntuple:", "receive-hashing:": "rxhash:", }, value) # remove empty lines, remove fixed parameters (those with "[fixed]") vl = [v for v in value.split('\n') if len(str(v)) > 0 and not re.search("\[fixed\]$", str(v))] if len(vl) < 2: return None # skip first line (device name), split to key/value, # remove pairs which are not key/value return dict([u for u in [re.split(r":\s*", str(v)) for v in vl[1:]] if len(u) == 2]) @classmethod def _nf_conntrack_hashsize_path(self): return "/sys/module/nf_conntrack/parameters/hashsize" @command_set("wake_on_lan", 
per_device=True) def _set_wake_on_lan(self, value, device, sim): if value is None: return None # see man ethtool for possible wol values, 0 added as an alias for 'd' value = re.sub(r"0", "d", str(value)); if not re.match(r"^[" + WOL_VALUES + r"]+$", value): log.warn("Incorrect 'wake_on_lan' value.") return None if not sim: self._cmd.execute(["ethtool", "-s", device, "wol", value]) return value @command_get("wake_on_lan") def _get_wake_on_lan(self, device, ignore_missing=False): value = None try: m = re.match(r".*Wake-on:\s*([" + WOL_VALUES + "]+).*", self._cmd.execute(["ethtool", device])[1], re.S) if m: value = m.group(1) except IOError: pass return value @command_set("nf_conntrack_hashsize") def _set_nf_conntrack_hashsize(self, value, sim): if value is None: return None hashsize = int(value) if hashsize >= 0: if not sim: self._cmd.write_to_file(self._nf_conntrack_hashsize_path(), hashsize) return hashsize else: return None @command_get("nf_conntrack_hashsize") def _get_nf_conntrack_hashsize(self): value = self._cmd.read_file(self._nf_conntrack_hashsize_path()) if len(value) > 0: return int(value) return None def _call_ip_link(self, args=[]): if not self._use_ip: return None args = ["ip", "link"] + args (rc, out, err_msg) = self._cmd.execute(args, no_errors=[errno.ENOENT], return_err=True) if rc == -errno.ENOENT: log.warn("ip command not found, ignoring for other devices") self._use_ip = False return None elif rc: log.info("Problem calling ip command") log.debug("(rc: %s, msg: '%s')" % (rc, err_msg)) return None return out def _ip_link_show(self, device=None): args = ["show"] if device: args.append(device) return self._call_ip_link(args) @command_set("txqueuelen", per_device=True) def _set_txqueuelen(self, value, device, sim): if value is None: return None try: int(value) except ValueError: log.warn("txqueuelen value '%s' is not integer" % value) return None if not sim: # there is inconsistency in "ip", where "txqueuelen" is set as it, but is shown as "qlen" res = 
self._call_ip_link(["set", "dev", device, "txqueuelen", value]) if res is None: log.warn("Cannot set txqueuelen for device '%s'" % device) return None return value def _get_re_ip_link_show(self, arg): """ Return regex for int arg value from "ip link show" command """ if arg not in self._re_ip_link_show: self._re_ip_link_show[arg] = re.compile(r".*\s+%s\s+(\d+)" % arg) return self._re_ip_link_show[arg] @command_get("txqueuelen") def _get_txqueuelen(self, device, ignore_missing=False): out = self._ip_link_show(device) if out is None: if not ignore_missing: log.info("Cannot get 'ip link show' result for txqueuelen value for device '%s'" % device) return None res = self._get_re_ip_link_show("qlen").search(out) if res is None: # We can theoretically get device without qlen (http://linux-ip.net/gl/ip-cref/ip-cref-node17.html) if not ignore_missing: log.info("Cannot get txqueuelen value from 'ip link show' result for device '%s'" % device) return None return res.group(1) @command_set("mtu", per_device=True) def _set_mtu(self, value, device, sim): if value is None: return None try: int(value) except ValueError: log.warn("mtu value '%s' is not integer" % value) return None if not sim: res = self._call_ip_link(["set", "dev", device, "mtu", value]) if res is None: log.warn("Cannot set mtu for device '%s'" % device) return None return value @command_get("mtu") def _get_mtu(self, device, ignore_missing=False): out = self._ip_link_show(device) if out is None: if not ignore_missing: log.info("Cannot get 'ip link show' result for mtu value for device '%s'" % device) return None res = self._get_re_ip_link_show("mtu").search(out) if res is None: # mtu value should be always present, but it's better to have a test if not ignore_missing: log.info("Cannot get mtu value from 'ip link show' result for device '%s'" % device) return None return res.group(1) # d is dict: {parameter: value} def _check_parameters(self, context, d): if context == "features": return True params = set(d.keys()) 
		supported_getter = { "coalesce": self._get_config_options_coalesce,
			"pause": self._get_config_options_pause,
			"ring": self._get_config_options_ring,
			"channels": self._get_config_options_channels }
		supported = set(supported_getter[context]().keys())
		if not params.issubset(supported):
			log.error("unknown %s parameter(s): %s" % (context, str(params - supported)))
			return False
		return True

	# parse output of ethtool -a
	def _parse_pause_parameters(self, s):
		s = self._cmd.multiple_re_replace(
			{"Autonegotiate": "autoneg", "RX": "rx", "TX": "tx"}, s)
		l = s.split("\n")[1:]
		l = [x for x in l if x != '' and not re.search(r"\[fixed\]", x)]
		return dict([x for x in [re.split(r":\s*", x) for x in l] if len(x) == 2])

	# parse output of ethtool -g
	def _parse_ring_parameters(self, s):
		a = re.split(r"^Current hardware settings:$", s, flags=re.MULTILINE)
		s = a[1]
		s = self._cmd.multiple_re_replace(
			{"RX": "rx", "RX Mini": "rx-mini", "RX Jumbo": "rx-jumbo", "TX": "tx"}, s)
		l = s.split("\n")
		l = [x for x in l if x != '']
		l = [x for x in [re.split(r":\s*", x) for x in l] if len(x) == 2]
		return dict(l)

	# parse output of ethtool -l
	def _parse_channels_parameters(self, s):
		a = re.split(r"^Current hardware settings:$", s, flags=re.MULTILINE)
		s = a[1]
		s = self._cmd.multiple_re_replace(
			{"RX": "rx", "TX": "tx", "Other": "other", "Combined": "combined"}, s)
		l = s.split("\n")
		l = [x for x in l if x != '']
		l = [x for x in [re.split(r":\s*", x) for x in l] if len(x) == 2]
		return dict(l)

	def _replace_channels_parameters(self, context, params_list, dev_params):
		mod_params_list = []
		if "combined" in params_list:
			mod_params_list.extend(["rx", params_list[1], "tx", params_list[1]])
		else:
			cnt = str(max(int(params_list[1]), int(params_list[3])))
			mod_params_list.extend(["combined", cnt])
		return dict(list(zip(mod_params_list[::2], mod_params_list[1::2])))

	def _check_device_support(self, context, parameters, device, dev_params):
		"""Filter out unsupported parameters and log a warning about each.

		Positional parameters:
		context -- context of the change
		parameters -- parameters to change
		device -- name of the device on which the parameters should be set
		dev_params -- dictionary of currently known parameters of the device
		"""
		supported_parameters = set(dev_params.keys())
		parameters_to_change = set(parameters.keys())
		# remove any unsupported parameter(s) from parameters_to_change
		unsupported_parameters = (parameters_to_change - supported_parameters)
		for param in unsupported_parameters:
			log.warning("%s parameter %s is not supported by device %s" % (
				context, param, device))
			parameters.pop(param, None)

	def _get_device_parameters(self, context, device):
		context2opt = { "coalesce": "-c", "features": "-k", "pause": "-a",
			"ring": "-g", "channels": "-l"}
		opt = context2opt[context]
		ret, value = self._cmd.execute(["ethtool", opt, device])
		if ret != 0 or len(value) == 0:
			return None
		context2parser = { "coalesce": self._parse_device_parameters,
			"features": self._parse_device_parameters,
			"pause": self._parse_pause_parameters,
			"ring": self._parse_ring_parameters,
			"channels": self._parse_channels_parameters }
		parser = context2parser[context]
		d = parser(value)
		if context == "coalesce" and not self._check_parameters(context, d):
			return None
		return d

	def _set_device_parameters(self, context, value, device, sim, dev_params = None):
		if value is None or len(value) == 0:
			return None
		d = self._parse_config_parameters(value, context)
		if d is None or not self._check_parameters(context, d):
			return {}
		# check if the device supports the parameters and filter out unsupported ones
		if dev_params:
			self._check_device_support(context, d, device, dev_params)
			# replace the channel parameters based on the device support
			if context == "channels" and str(dev_params[next(iter(d))]) in ["n/a", "0"]:
				d = self._replace_channels_parameters(context, self._cmd.dict2list(d), dev_params)
		if not sim and len(d) != 0:
			log.debug("setting %s: %s" % (context, str(d)))
			context2opt = { "coalesce": "-C", "features": "-K", "pause": "-A",
				"ring": "-G", "channels": "-L"}
			opt = context2opt[context]
			# ignore ethtool return code 80, it means the parameter is already set
			self._cmd.execute(["ethtool", opt, device] + self._cmd.dict2list(d), no_errors = [80])
		return d

	def _custom_parameters(self, context, start, value, device, verify):
		storage_key = self._storage_key(command_name = context, device_name = device)
		if start:
			params_current = self._get_device_parameters(context, device)
			if params_current is None or len(params_current) == 0:
				return False
			params_set = self._set_device_parameters(context, value, device,
				verify, dev_params = params_current)
			# if none of the parameters passed the checks, the command failed completely
			if params_set is None or len(params_set) == 0:
				return False
			relevant_params_current = [(param, value) for param, value
				in params_current.items() if param in params_set]
			relevant_params_current = dict(relevant_params_current)
			if verify:
				res = (self._cmd.dict2list(params_set) == self._cmd.dict2list(relevant_params_current))
				self._log_verification_result(context, res, params_set,
					relevant_params_current, device = device)
				return res
			# only those parameters which passed the checks are saved
			self._storage.set(storage_key, " ".join(self._cmd.dict2list(relevant_params_current)))
		else:
			original_value = self._storage.get(storage_key)
			# the storage contains only parameters which were already tested,
			# so the check for supported parameters can be skipped
			self._set_device_parameters(context, original_value, device, False)
		return None

	@command_custom("features", per_device = True)
	def _features(self, start, value, device, verify, ignore_missing):
		return self._custom_parameters("features", start, value, device, verify)

	@command_custom("coalesce", per_device = True)
	def _coalesce(self, start, value, device, verify, ignore_missing):
		return self._custom_parameters("coalesce", start, value, device, verify)

	@command_custom("pause", per_device = True)
	def _pause(self, start, value, device, verify, ignore_missing):
		return self._custom_parameters("pause", start, value, device, verify)

	@command_custom("ring", per_device = True)
	def _ring(self, start, value, device, verify, ignore_missing):
		return self._custom_parameters("ring", start, value, device, verify)

	@command_custom("channels", per_device = True)
	def _channels(self, start, value, device, verify, ignore_missing):
		return self._custom_parameters("channels", start, value, device, verify)

File: tuned-2.19.0.29+git.b894a3e/tuned/plugins/plugin_rtentsk.py

from . import base
from .decorators import *
import tuned.logs
from tuned.utils.commands import commands
import glob
import socket
import time

log = tuned.logs.get()

class RTENTSKPlugin(base.Plugin):
	"""
	`rtentsk`::

	Plugin for avoiding interruptions caused by static key IPIs triggered
	by opening a socket with timestamping enabled (by opening such a
	socket ourselves, the static key is kept enabled).
	"""

	def _instance_init(self, instance):
		instance._has_static_tuning = True
		instance._has_dynamic_tuning = False

		# Neither SO_TIMESTAMP nor SOF_TIMESTAMPING_OPT_TX_SWHW is
		# defined by the socket module
		SO_TIMESTAMP = 29 # see include/uapi/asm-generic/socket.h
				  # #define SO_TIMESTAMP 0x4012 # parisc!
		SOF_TIMESTAMPING_OPT_TX_SWHW = (1<<14) # see include/uapi/linux/net_tstamp.h

		s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM, socket.IPPROTO_UDP)
		s.setsockopt(socket.SOL_SOCKET, SO_TIMESTAMP, SOF_TIMESTAMPING_OPT_TX_SWHW)
		self.rtentsk_socket = s
		log.info("opened SOF_TIMESTAMPING_OPT_TX_SWHW socket")

	def _instance_cleanup(self, instance):
		s = self.rtentsk_socket
		s.close()

File: tuned-2.19.0.29+git.b894a3e/tuned/plugins/plugin_scheduler.py

# code for cores isolation was inspired by the Tuna implementation
# perf code was borrowed from kernel/tools/perf/python/twatch.py
# thanks to Arnaldo Carvalho de Melo <acme@redhat.com>

from . import base
from .decorators import *
import tuned.logs
import re
from subprocess import *
import threading
import perf
import select
import tuned.consts as consts
import procfs
from tuned.utils.commands import commands
import errno
import os
import collections
import math

# Check existence of the scheduler API in the os module
try:
	os.SCHED_FIFO
except AttributeError:
	import schedutils

log = tuned.logs.get()

class SchedulerParams(object):
	def __init__(self, cmd, cmdline = None, scheduler = None,
			priority = None, affinity = None, cgroup = None):
		self._cmd = cmd
		self.cmdline = cmdline
		self.scheduler = scheduler
		self.priority = priority
		self.affinity = affinity
		self.cgroup = cgroup

	@property
	def affinity(self):
		if self._affinity is None:
			return None
		else:
			return self._cmd.bitmask2cpulist(self._affinity)

	@affinity.setter
	def affinity(self, value):
		if value is None:
			self._affinity = None
		else:
			self._affinity = self._cmd.cpulist2bitmask(value)

class IRQAffinities(object):
	def __init__(self):
		self.irqs = {}
		self.default = None
		# IRQs that don't support changing CPU affinity:
		self.unchangeable = []

class SchedulerUtils(object):
	"""
	Class encapsulating the scheduler implementation in the os module
	"""

	_dict_schedcfg2schedconst = {
		"f": "SCHED_FIFO",
		"b": "SCHED_BATCH",
		"r": "SCHED_RR",
		"o": "SCHED_OTHER",
		"i": "SCHED_IDLE",
	}

	def __init__(self):
		# {"f": os.SCHED_FIFO...}
		self._dict_schedcfg2num = dict((k, getattr(os, name))
			for k, name in self._dict_schedcfg2schedconst.items())
		# { os.SCHED_FIFO: "SCHED_FIFO"... }
		self._dict_num2schedconst = dict((getattr(os, name), name)
			for name in self._dict_schedcfg2schedconst.values())

	def sched_cfg_to_num(self, str_scheduler):
		return self._dict_schedcfg2num.get(str_scheduler)

	# Reimplementation of schedstr from schedutils for logging purposes
	def sched_num_to_const(self, scheduler):
		return self._dict_num2schedconst.get(scheduler)

	def get_scheduler(self, pid):
		return os.sched_getscheduler(pid)

	def set_scheduler(self, pid, sched, prio):
		os.sched_setscheduler(pid, sched, os.sched_param(prio))

	def get_affinity(self, pid):
		return os.sched_getaffinity(pid)

	def set_affinity(self, pid, affinity):
		os.sched_setaffinity(pid, affinity)

	def get_priority(self, pid):
		return os.sched_getparam(pid).sched_priority

	def get_priority_min(self, sched):
		return os.sched_get_priority_min(sched)

	def get_priority_max(self, sched):
		return os.sched_get_priority_max(sched)

class SchedulerUtilsSchedutils(SchedulerUtils):
	"""
	Class encapsulating the scheduler implementation in the schedutils module
	"""
	def __init__(self):
		# { "f": schedutils.SCHED_FIFO... }
		self._dict_schedcfg2num = dict((k, getattr(schedutils, name))
			for k, name in self._dict_schedcfg2schedconst.items())
		# { schedutils.SCHED_FIFO: "SCHED_FIFO"... }
		self._dict_num2schedconst = dict((getattr(schedutils, name), name)
			for name in self._dict_schedcfg2schedconst.values())

	def get_scheduler(self, pid):
		return schedutils.get_scheduler(pid)

	def set_scheduler(self, pid, sched, prio):
		schedutils.set_scheduler(pid, sched, prio)

	def get_affinity(self, pid):
		return schedutils.get_affinity(pid)

	def set_affinity(self, pid, affinity):
		schedutils.set_affinity(pid, affinity)

	def get_priority(self, pid):
		return schedutils.get_priority(pid)

	def get_priority_min(self, sched):
		return schedutils.get_priority_min(sched)

	def get_priority_max(self, sched):
		return schedutils.get_priority_max(sched)

class SchedulerPlugin(base.Plugin):
	"""
	`scheduler`::

	Allows tuning of scheduling priorities, process/thread/IRQ affinities,
	and CPU isolation.
	+
	To prevent processes/threads/IRQs from using certain CPUs, use the
	[option]`isolated_cores` option. It changes process/thread affinities
	and IRQ affinities, and it sets `default_smp_affinity` for IRQs. The
	CPU affinity mask is adjusted for all processes and threads matching
	the [option]`ps_whitelist` option, subject to success of the
	`sched_setaffinity()` system call. The default setting of the
	[option]`ps_whitelist` regular expression is `.*` to match all process
	and thread names. To exclude certain processes and threads, use the
	[option]`ps_blacklist` option. The value of this option is also
	interpreted as a regular expression and process/thread names
	(`ps -eo cmd`) are matched against that expression. Profile rollback
	allows all matching processes and threads to run on all CPUs and
	restores the IRQ settings that were in place prior to the profile
	application.
	+
	Multiple regular expressions for the [option]`ps_whitelist` and
	[option]`ps_blacklist` options are allowed, separated by `;`. A quoted
	semicolon `\;` is taken literally.
	+
	.Isolate CPUs 2-4
	====
	----
	[scheduler]
	isolated_cores=2-4
	ps_blacklist=.*pmd.*;.*PMD.*;^DPDK;.*qemu-kvm.*
	----
	Isolate CPUs 2-4 while ignoring processes and threads matching the
	`ps_blacklist` regular expressions.
	====
	The [option]`default_irq_smp_affinity` option controls the values
	*TuneD* writes to `/proc/irq/default_smp_affinity`. The file specifies
	the default affinity mask that applies to all non-active IRQs. Once an
	IRQ is allocated/activated, its affinity bitmask is set to the default
	mask.
	+
	The following values are supported:
	+
	--
	`calc`::
	The content of `/proc/irq/default_smp_affinity` is calculated from the
	`isolated_cores` parameter. Non-isolated cores are calculated as an
	inversion of `isolated_cores`. Then the intersection of the
	non-isolated cores and the previous content of
	`/proc/irq/default_smp_affinity` is written to
	`/proc/irq/default_smp_affinity`. If the intersection is an empty set,
	just the non-isolated cores are written to
	`/proc/irq/default_smp_affinity`. This behavior is the default if the
	`default_irq_smp_affinity` parameter is omitted.

	`ignore`::
	*TuneD* does not touch `/proc/irq/default_smp_affinity`.

	explicit cpulist::
	The cpulist (such as 1,3-4) is unpacked and written directly to
	`/proc/irq/default_smp_affinity`.
	--
	+
	.An explicit CPU list to set the default IRQ smp affinity to CPUs 0 and 2
	====
	----
	[scheduler]
	isolated_cores=1,3
	default_irq_smp_affinity=0,2
	----
	====
	To adjust the scheduling policy, priority and affinity for a group of
	processes/threads, use the following syntax:
	+
	[subs="+quotes,+macros"]
	----
	group.__groupname__=__rule_prio__:__sched__:__prio__:__affinity__:__regex__
	----
	+
	where `__rule_prio__` defines the internal *TuneD* priority of the
	rule. Rules are sorted based on priority. This is needed for
	inheritance to be able to reorder previously defined rules. Rules with
	equal `__rule_prio__` should be processed in the order in which they
	were defined. However, this is Python interpreter dependent.
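The `calc` mode described above boils down to a small bitmask computation. The following is a minimal standalone sketch of that logic; the function name, CPU count and mask values are hypothetical, not TuneD's internal API:

```python
def calc_default_affinity(isolated_mask, prev_mask, num_cpus):
    # non-isolated cores are the inversion of the isolated cores
    all_cpus = (1 << num_cpus) - 1
    non_isolated = all_cpus & ~isolated_mask
    # intersect with the previous default_smp_affinity content;
    # fall back to the non-isolated mask if the intersection is empty
    intersect = non_isolated & prev_mask
    return intersect if intersect != 0 else non_isolated

# 8 hypothetical CPUs, isolated_cores=2-4 -> mask 0b00011100
print(hex(calc_default_affinity(0b00011100, 0xff, 8)))        # 0xe3
# previous mask contains only isolated cores -> empty intersection
print(hex(calc_default_affinity(0b00011100, 0b00001100, 8)))  # 0xe3
```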
	To disable an inherited rule for `__groupname__`, use:
	+
	[subs="+quotes,+macros"]
	----
	group.__groupname__=
	----
	+
	`__sched__` must be one of:
	*`f`* for FIFO, *`b`* for batch, *`r`* for round robin, *`o`* for
	other, *`*`* for no change.
	+
	`__affinity__` is the CPU affinity in hexadecimal. Use `*` for no
	change.
	+
	`__prio__` is the scheduling priority (see `chrt -m`).
	+
	`__regex__` is a Python regular expression. It is matched against the
	output of
	+
	[subs="+quotes,+macros"]
	----
	ps -eo cmd
	----
	+
	Any given process name may match more than one group. In such a case,
	the priority and scheduling policy are taken from the last matching
	`__regex__`.
	+
	.Setting scheduling policy and priorities for kernel threads and watchdog
	====
	----
	[scheduler]
	group.kthreads=0:*:1:*:\[.*\]$
	group.watchdog=0:f:99:*:\[watchdog.*\]
	----
	====
	+
	The scheduler plug-in uses a perf event loop to catch newly created
	processes. By default it listens to `perf.RECORD_COMM` and
	`perf.RECORD_EXIT` events. By setting the [option]`perf_process_fork`
	option to `true`, `perf.RECORD_FORK` events will also be listened to.
	In other words, child processes created by the `fork()` system call
	will be processed. Since child processes inherit the CPU affinity of
	their parents, the scheduler plug-in usually does not need to
	explicitly process these events. As processing perf events can pose a
	significant CPU overhead, the [option]`perf_process_fork` option is
	set to `false` by default. Due to this, child processes are not
	processed by the scheduler plug-in.
	+
	The CPU overhead of the scheduler plug-in can be mitigated by using
	the scheduler [option]`runtime` option and setting it to `0`. This
	completely disables the dynamic scheduler functionality; perf events
	are not monitored and acted upon. The disadvantage of this approach is
	that process/thread tuning is done only at profile application.
	+
	.Disabling the scheduler dynamic functionality
	====
	----
	[scheduler]
	runtime=0
	isolated_cores=1,3
	----
	====
	+
	NOTE: For perf events, a memory mapped buffer is used. Under heavy
	load the buffer may overflow. In such cases the `scheduler` plug-in
	may start missing events and fail to process some newly created
	processes. Increasing the buffer size may help. The buffer size can be
	set with the [option]`perf_mmap_pages` option. The value of this
	parameter has to be expressed in powers of 2. If it is not a power of
	2, the nearest higher power of 2 is calculated from it and this
	calculated value is used. If the [option]`perf_mmap_pages` option is
	omitted, the default kernel value is used.
	+
	The scheduler plug-in supports process/thread confinement using
	cgroups v1.
	+
	The [option]`cgroup_mount_point` option specifies the path at which to
	mount the cgroup filesystem, or where *TuneD* expects it to be
	mounted. If unset, `/sys/fs/cgroup/cpuset` is expected.
	+
	If the [option]`cgroup_groups_init` option is set to `1`, *TuneD* will
	create (and remove) all cgroups defined with the `cgroup*` options.
	This is the default behavior. If it is set to `0`, the cgroups need to
	be preset by other means.
	+
	If the [option]`cgroup_mount_point_init` option is set to `1`, *TuneD*
	will create (and remove) the cgroup mountpoint. It implies
	`cgroup_groups_init = 1`. If set to `0`, the cgroups mount point needs
	to be preset by other means. This is the default behavior.
	+
	The [option]`cgroup_for_isolated_cores` option is the cgroup name used
	for the [option]`isolated_cores` option functionality. For example, if
	a system has 4 CPUs, `isolated_cores=1` means that all
	processes/threads will be moved to CPUs 0,2-3. The scheduler plug-in
	will isolate the specified core by writing the calculated CPU affinity
	to the `cpuset.cpus` control file of the specified cgroup and move all
	the matching processes/threads to this group. If this option is unset,
	classic cpuset affinity using `sched_setaffinity()` will be used.
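The `perf_mmap_pages` rounding described above (round a non-power-of-2 value up to the nearest higher power of 2) can be sketched as follows; this is an illustrative helper, not TuneD's internal function:

```python
import math

def round_up_pow2(pages):
    # nearest higher (or equal) power of two, as described above
    return int(2 ** math.ceil(math.log(pages, 2)))

print(round_up_pow2(100))  # 128
print(round_up_pow2(5))    # 8
```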
	+
	The [option]`cgroup.__cgroup_name__` option defines affinities for
	arbitrary cgroups. Even hierarchic cgroups can be used, but the
	hierarchy needs to be specified in the correct order. Also, *TuneD*
	does not do any sanity checks here, with the exception that it forces
	the cgroup to be under [option]`cgroup_mount_point`.
	+
	The syntax of the scheduler option starting with `group.` has been
	augmented to accept `cgroup.__cgroup_name__` instead of the
	hexadecimal `__affinity__`. The matching processes will be moved to
	the cgroup `__cgroup_name__`. It is also possible to use cgroups which
	have not been defined by the [option]`cgroup.` option as described
	above, i.e. cgroups not managed by *TuneD*.
	+
	All cgroup names are sanitized by replacing all dots (`.`) with
	slashes (`/`). This is to prevent the plug-in from writing outside
	[option]`cgroup_mount_point`.
	+
	.Using cgroups v1 with the scheduler plug-in
	====
	----
	[scheduler]
	cgroup_mount_point=/sys/fs/cgroup/cpuset
	cgroup_mount_point_init=1
	cgroup_groups_init=1
	cgroup_for_isolated_cores=group
	cgroup.group1=2
	cgroup.group2=0,2
	group.ksoftirqd=0:f:2:cgroup.group1:ksoftirqd.*
	ps_blacklist=ksoftirqd.*;rcuc.*;rcub.*;ktimersoftd.*
	isolated_cores=1
	----
	Cgroup `group1` has its affinity set to CPU 2 and cgroup `group2` to
	CPUs 0,2. Given a 4 CPU setup, the [option]`isolated_cores=1` option
	causes all processes/threads to be moved to CPU cores 0,2-3.
	Processes/threads that are blacklisted by the [option]`ps_blacklist`
	regular expression will not be moved. The scheduler plug-in will
	isolate the specified core by writing the CPU affinity 0,2-3 to the
	`cpuset.cpus` control file of the cgroup `group` and move all the
	matching processes/threads to this cgroup.
	====
	The [option]`cgroup_ps_blacklist` option allows excluding processes
	which belong to the blacklisted cgroups. The regular expression
	specified by this option is matched against cgroup hierarchies from
	`/proc/PID/cgroups`.
	Cgroups v1 hierarchies from `/proc/PID/cgroups` are separated by
	commas ',' prior to regular expression matching. The following is an
	example of content against which the regular expression is matched:
	`10:hugetlb:/,9:perf_event:/,8:blkio:/`
	+
	Multiple regular expressions can be separated by a semicolon ';'. The
	semicolon represents a logical 'or' operator.
	+
	.Cgroup-based exclusion of processes from the scheduler
	====
	----
	[scheduler]
	isolated_cores=1
	cgroup_ps_blacklist=:/daemons\b
	----
	The scheduler plug-in will move all processes away from core 1 except
	processes which belong to the cgroup '/daemons'. The '\b' is a regular
	expression metacharacter that matches a word boundary.
	----
	[scheduler]
	isolated_cores=1
	cgroup_ps_blacklist=\b8:blkio:
	----
	The scheduler plug-in will exclude all processes which belong to a
	cgroup with hierarchy-ID 8 and controller-list blkio.
	====
	Recent kernels moved some `sched_` and `numa_balancing_` kernel
	run-time parameters from `/proc/sys/kernel`, managed by the `sysctl`
	utility, to `debugfs`, typically mounted under `/sys/kernel/debug`.
	TuneD provides an abstraction mechanism for the following parameters
	via the scheduler plug-in: [option]`sched_min_granularity_ns`,
	[option]`sched_latency_ns`, [option]`sched_wakeup_granularity_ns`,
	[option]`sched_tunable_scaling`, [option]`sched_migration_cost_ns`,
	[option]`sched_nr_migrate`, [option]`numa_balancing_scan_delay_ms`,
	[option]`numa_balancing_scan_period_min_ms`,
	[option]`numa_balancing_scan_period_max_ms` and
	[option]`numa_balancing_scan_size_mb`. Based on the kernel used, TuneD
	will write the specified value to the correct location.
	+
	.Set tasks' "cache hot" value for migration decisions.
	====
	----
	[scheduler]
	sched_migration_cost_ns=500000
	----
	On older kernels, this is equivalent to:
	----
	[sysctl]
	kernel.sched_migration_cost_ns=500000
	----
	that is, the value `500000` will be written to
	`/proc/sys/kernel/sched_migration_cost_ns`.
	However, on more recent kernels, the value `500000` will be written to
	`/sys/kernel/debug/sched/migration_cost_ns`.
	====
	"""
	def __init__(self, monitor_repository, storage_factory, hardware_inventory,
			device_matcher, device_matcher_udev, plugin_instance_factory,
			global_cfg, variables):
		super(SchedulerPlugin, self).__init__(monitor_repository,
			storage_factory, hardware_inventory, device_matcher,
			device_matcher_udev, plugin_instance_factory, global_cfg, variables)
		self._has_dynamic_options = True
		self._daemon = consts.CFG_DEF_DAEMON
		self._sleep_interval = int(consts.CFG_DEF_SLEEP_INTERVAL)
		if global_cfg is not None:
			self._daemon = global_cfg.get_bool(consts.CFG_DAEMON, consts.CFG_DEF_DAEMON)
			self._sleep_interval = int(global_cfg.get(consts.CFG_SLEEP_INTERVAL, consts.CFG_DEF_SLEEP_INTERVAL))
		self._cmd = commands()
		# helper variable used so that the hint that the error may be caused
		# by Secure Boot is shown only once
		self._secure_boot_hint = None
		# paths cache for sched_ and numa_ tunings
		self._sched_knob_paths_cache = {}
		# default is to whitelist all and blacklist none
		self._ps_whitelist = ".*"
		self._ps_blacklist = ""
		self._cgroup_ps_blacklist_re = ""
		self._cpus = perf.cpu_map()
		self._scheduler_storage_key = self._storage_key(command_name = "scheduler")
		self._irq_storage_key = self._storage_key(command_name = "irq")
		try:
			self._scheduler_utils = SchedulerUtils()
		except AttributeError:
			self._scheduler_utils = SchedulerUtilsSchedutils()

	def _calc_mmap_pages(self, mmap_pages):
		if mmap_pages is None:
			return None
		try:
			mp = int(mmap_pages)
		except ValueError:
			return 0
		if mp <= 0:
			return 0
		# round up to the nearest power of two value
		return int(2 ** math.ceil(math.log(mp, 2)))

	def _instance_init(self, instance):
		instance._has_dynamic_tuning = False
		instance._has_static_tuning = True
		# this is a hack, runtime_tuning should be covered by the dynamic_tuning
		# configuration
		# TODO: add per-plugin dynamic tuning configuration and use the
		# dynamic_tuning configuration instead of runtime_tuning
		instance._runtime_tuning = True

		# FIXME: do we want to do this here?
		# recover original values in case of crash
		self._scheduler_original = self._storage.get(self._scheduler_storage_key, {})
		if len(self._scheduler_original) > 0:
			log.info("recovering scheduling settings from previous run")
			self._restore_ps_affinity()
			self._scheduler_original = {}
			self._storage.unset(self._scheduler_storage_key)

		self._cgroups_original_affinity = dict()

		# calculated by the isolated_cores setter
		self._affinity = None

		self._cgroup_affinity_initialized = False
		self._cgroup = None
		self._cgroups = collections.OrderedDict([(self._sanitize_cgroup_path(option[7:]), self._variables.expand(affinity))
			for option, affinity in instance.options.items()
			if option[:7] == "cgroup." and len(option) > 7])

		instance._scheduler = instance.options

		perf_mmap_pages_raw = self._variables.expand(instance.options["perf_mmap_pages"])
		perf_mmap_pages = self._calc_mmap_pages(perf_mmap_pages_raw)
		if perf_mmap_pages == 0:
			log.error("Invalid 'perf_mmap_pages' value specified: '%s', using default kernel value" % perf_mmap_pages_raw)
			perf_mmap_pages = None
		if perf_mmap_pages is not None and str(perf_mmap_pages) != perf_mmap_pages_raw:
			log.info("'perf_mmap_pages' value has to be power of two, specified: '%s', using: '%d'" % (perf_mmap_pages_raw, perf_mmap_pages))
		for k in instance._scheduler:
			instance._scheduler[k] = self._variables.expand(instance._scheduler[k])
		if self._cmd.get_bool(instance._scheduler.get("runtime", 1)) == "0":
			instance._runtime_tuning = False
		instance._terminate = threading.Event()
		if self._daemon and instance._runtime_tuning:
			try:
				instance._threads = perf.thread_map()
				evsel = perf.evsel(type = perf.TYPE_SOFTWARE,
					config = perf.COUNT_SW_DUMMY,
					task = 1, comm = 1, mmap = 0, freq = 0,
					wakeup_events = 1, watermark = 1,
					sample_type = perf.SAMPLE_TID | perf.SAMPLE_CPU)
				evsel.open(cpus = self._cpus, threads = instance._threads)
				instance._evlist = perf.evlist(self._cpus, instance._threads)
				instance._evlist.add(evsel)
				if perf_mmap_pages is None:
					instance._evlist.mmap()
				else:
					instance._evlist.mmap(pages = perf_mmap_pages)
			# no perf
			except:
				instance._runtime_tuning = False

	def _instance_cleanup(self, instance):
		pass

	@classmethod
	def _get_config_options(cls):
		return {
			"isolated_cores": None,
			"cgroup_mount_point": consts.DEF_CGROUP_MOUNT_POINT,
			"cgroup_mount_point_init": False,
			"cgroup_groups_init": True,
			"cgroup_for_isolated_cores": None,
			"cgroup_ps_blacklist": None,
			"ps_whitelist": None,
			"ps_blacklist": None,
			"default_irq_smp_affinity": "calc",
			"perf_mmap_pages": None,
			"perf_process_fork": "false",
			"sched_min_granularity_ns": None,
			"sched_latency_ns": None,
			"sched_wakeup_granularity_ns": None,
			"sched_tunable_scaling": None,
			"sched_migration_cost_ns": None,
			"sched_nr_migrate": None,
			"numa_balancing_scan_delay_ms": None,
			"numa_balancing_scan_period_min_ms": None,
			"numa_balancing_scan_period_max_ms": None,
			"numa_balancing_scan_size_mb": None,
		}

	def _sanitize_cgroup_path(self, value):
		return str(value).replace(".", "/") if value is not None else None

	# Raises OSError, IOError
	def _get_cmdline(self, process):
		if not isinstance(process, procfs.process):
			pid = process
			process = procfs.process(pid)
		cmdline = procfs.process_cmdline(process)
		if self._is_kthread(process):
			cmdline = "[" + cmdline + "]"
		return cmdline

	# Raises OSError, IOError
	def get_processes(self):
		ps = procfs.pidstats()
		ps.reload_threads()
		processes = {}
		for proc in ps.values():
			try:
				cmd = self._get_cmdline(proc)
				pid = proc["pid"]
				processes[pid] = cmd
				if "threads" in proc:
					for pid in proc["threads"].keys():
						cmd = self._get_cmdline(proc)
						processes[pid] = cmd
			except (OSError, IOError) as e:
				if e.errno == errno.ENOENT \
						or e.errno == errno.ESRCH:
					continue
				else:
					raise
		return processes

	# Raises OSError
	# Raises SystemError with old (pre-0.4) python-schedutils
	# instead of OSError
	# If PID doesn't exist, errno == ESRCH
	def _get_rt(self, pid):
		scheduler = self._scheduler_utils.get_scheduler(pid)
		sched_str = self._scheduler_utils.sched_num_to_const(scheduler)
		priority = self._scheduler_utils.get_priority(pid)
		log.debug("Read scheduler policy '%s' and priority '%d' of PID '%d'" % (sched_str, priority, pid))
		return (scheduler, priority)

	def _set_rt(self, pid, sched, prio):
		sched_str = self._scheduler_utils.sched_num_to_const(sched)
		log.debug("Setting scheduler policy to '%s' and priority to '%d' of PID '%d'." % (sched_str, prio, pid))
		try:
			prio_min = self._scheduler_utils.get_priority_min(sched)
			prio_max = self._scheduler_utils.get_priority_max(sched)
			if prio < prio_min or prio > prio_max:
				log.error("Priority for %s must be in range %d - %d. '%d' was given." % (sched_str, prio_min, prio_max, prio))
		# Workaround for old (pre-0.4) python-schedutils which raised
		# SystemError instead of OSError
		except (SystemError, OSError) as e:
			log.error("Failed to get allowed priority range: %s" % e)
		try:
			self._scheduler_utils.set_scheduler(pid, sched, prio)
		except (SystemError, OSError) as e:
			if hasattr(e, "errno") and e.errno == errno.ESRCH:
				log.debug("Failed to set scheduling parameters of PID %d, the task vanished." % pid)
			else:
				log.error("Failed to set scheduling parameters of PID %d: %s" % (pid, e))

	# process is a procfs.process object
	# Raises OSError, IOError
	def _is_kthread(self, process):
		return process["stat"]["flags"] & procfs.pidstat.PF_KTHREAD != 0

	# Return codes:
	# 0 - Affinity is fixed
	# 1 - Affinity is changeable
	# -1 - Task vanished
	# -2 - Error
	def _affinity_changeable(self, pid):
		try:
			process = procfs.process(pid)
			if process["stat"].is_bound_to_cpu():
				if process["stat"]["state"] == "Z":
					log.debug("Affinity of zombie task with PID %d cannot be changed, the task's affinity mask is fixed." % pid)
				elif self._is_kthread(process):
					log.debug("Affinity of kernel thread with PID %d cannot be changed, the task's affinity mask is fixed." % pid)
				else:
					log.warn("Affinity of task with PID %d cannot be changed, the task's affinity mask is fixed." % pid)
				return 0
			else:
				return 1
		except (OSError, IOError) as e:
			if e.errno == errno.ENOENT or e.errno == errno.ESRCH:
				log.debug("Failed to get task info for PID %d, the task vanished." % pid)
				return -1
			else:
				log.error("Failed to get task info for PID %d: %s" % (pid, e))
				return -2
		except (AttributeError, KeyError) as e:
			log.error("Failed to get task info for PID %d: %s" % (pid, e))
			return -2

	def _store_orig_process_rt(self, pid, scheduler, priority):
		try:
			params = self._scheduler_original[pid]
		except KeyError:
			params = SchedulerParams(self._cmd)
			self._scheduler_original[pid] = params
		if params.scheduler is None and params.priority is None:
			params.scheduler = scheduler
			params.priority = priority

	def _tune_process_rt(self, pid, sched, prio):
		cont = True
		if sched is None and prio is None:
			return cont
		try:
			(prev_sched, prev_prio) = self._get_rt(pid)
			if sched is None:
				sched = prev_sched
			self._set_rt(pid, sched, prio)
			self._store_orig_process_rt(pid, prev_sched, prev_prio)
		except (SystemError, OSError) as e:
			if hasattr(e, "errno") and e.errno == errno.ESRCH:
				log.debug("Failed to read scheduler policy of PID %d, the task vanished." % pid)
				if pid in self._scheduler_original:
					del self._scheduler_original[pid]
				cont = False
			else:
				log.error("Refusing to set scheduler and priority of PID %d, reading original scheduling parameters failed: %s" % (pid, e))
		return cont

	def _is_cgroup_affinity(self, affinity):
		return str(affinity)[:7] == "cgroup."
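The affinity-value convention used by `_is_cgroup_affinity` and the conversion helpers around it (a value starting with `cgroup.` names a cgroup, anything else is a hexadecimal CPU mask) can be illustrated with a standalone sketch; `hex2cpulist` here is a hypothetical stand-in for the `commands` helper of the same name, not TuneD's actual implementation:

```python
def is_cgroup_affinity(affinity):
    # mirrors _is_cgroup_affinity: prefix test on the first 7 characters
    return str(affinity)[:7] == "cgroup."

def hex2cpulist(mask):
    # hypothetical stand-in: expand a hex mask into the list of set CPU bits
    value = int(str(mask), 16)
    return [cpu for cpu in range(value.bit_length()) if value & (1 << cpu)]

print(is_cgroup_affinity("cgroup.group1"))  # True
print(hex2cpulist("a"))                     # [1, 3] (0xa = 0b1010)
```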
	def _store_orig_process_affinity(self, pid, affinity, is_cgroup = False):
		try:
			params = self._scheduler_original[pid]
		except KeyError:
			params = SchedulerParams(self._cmd)
			self._scheduler_original[pid] = params
		if params.affinity is None and params.cgroup is None:
			if is_cgroup:
				params.cgroup = affinity
			else:
				params.affinity = affinity

	def _get_cgroup_affinity(self, pid):
		# we cannot use procfs, because it uses the comma ',' delimiter which
		# can be ambiguous
		for l in self._cmd.read_file("%s/%s/%s" % (consts.PROCFS_MOUNT_POINT, str(pid), "cgroup"), no_error = True).split("\n"):
			try:
				cgroup = l.split(":cpuset:")[1][1:]
				return cgroup if cgroup != "" else "/"
			except IndexError:
				pass
		return "/"

	# it can be an arbitrary cgroup, even a cgroup we didn't set, but it needs
	# to be under "cgroup_mount_point"
	def _set_cgroup(self, pid, cgroup):
		cgroup = self._sanitize_cgroup_path(cgroup)
		path = self._cgroup_mount_point
		if cgroup != "/":
			path = "%s/%s" % (path, cgroup)
		self._cmd.write_to_file("%s/tasks" % path, str(pid), no_error = True)

	def _parse_cgroup_affinity(self, cgroup):
		# "cgroup.CGROUP"
		cgroup = cgroup[7:]
		# this should be faster than string comparison
		is_cgroup = not isinstance(cgroup, list) and len(cgroup) > 0
		return is_cgroup, cgroup

	def _tune_process_affinity(self, pid, affinity, intersect = False):
		cont = True
		if affinity is None:
			return cont
		try:
			(is_cgroup, cgroup) = self._parse_cgroup_affinity(affinity)
			if is_cgroup:
				prev_affinity = self._get_cgroup_affinity(pid)
				self._set_cgroup(pid, cgroup)
			else:
				prev_affinity = self._get_affinity(pid)
				if intersect:
					affinity = self._get_intersect_affinity(prev_affinity, affinity, affinity)
				self._set_affinity(pid, affinity)
			self._store_orig_process_affinity(pid, prev_affinity, is_cgroup)
		except (SystemError, OSError) as e:
			if hasattr(e, "errno") and e.errno == errno.ESRCH:
				log.debug("Failed to read affinity of PID %d, the task vanished." % pid)
				if pid in self._scheduler_original:
					del self._scheduler_original[pid]
				cont = False
			else:
				log.error("Refusing to set CPU affinity of PID %d, reading original affinity failed: %s" % (pid, e))
		return cont

	# tune process and store previous values
	def _tune_process(self, pid, cmd, sched, prio, affinity):
		cont = self._tune_process_rt(pid, sched, prio)
		if not cont:
			return
		cont = self._tune_process_affinity(pid, affinity)
		if not cont or pid not in self._scheduler_original:
			return
		self._scheduler_original[pid].cmdline = cmd

	def _convert_sched_params(self, str_scheduler, str_priority):
		scheduler = self._scheduler_utils.sched_cfg_to_num(str_scheduler)
		if scheduler is None and str_scheduler != "*":
			log.error("Invalid scheduler: %s. Scheduler and priority will be ignored." % str_scheduler)
			return (None, None)
		else:
			try:
				priority = int(str_priority)
			except ValueError:
				log.error("Invalid priority: %s. Scheduler and priority will be ignored." % str_priority)
				return (None, None)
		return (scheduler, priority)

	def _convert_affinity(self, str_affinity):
		if str_affinity == "*":
			affinity = None
		elif self._is_cgroup_affinity(str_affinity):
			affinity = str_affinity
		else:
			affinity = self._cmd.hex2cpulist(str_affinity)
			if not affinity:
				log.error("Invalid affinity: %s. It will be ignored." % str_affinity)
				affinity = None
		return affinity

	def _convert_sched_cfg(self, vals):
		(rule_prio, scheduler, priority, affinity, regex) = vals
		(scheduler, priority) = self._convert_sched_params(scheduler, priority)
		affinity = self._convert_affinity(affinity)
		return (rule_prio, scheduler, priority, affinity, regex)

	def _cgroup_create_group(self, cgroup):
		path = "%s/%s" % (self._cgroup_mount_point, cgroup)
		try:
			os.mkdir(path, consts.DEF_CGROUP_MODE)
		except OSError as e:
			log.error("Unable to create cgroup '%s': %s" % (path, e))
		if (not self._cmd.write_to_file("%s/%s" % (path, "cpuset.mems"),
				self._cmd.read_file("%s/%s" % (self._cgroup_mount_point, "cpuset.mems"), no_error = True),
				no_error = True)):
			log.error("Unable to initialize 'cpuset.mems' for cgroup '%s'" % path)

	def _cgroup_initialize_groups(self):
		if self._cgroup is not None and not self._cgroup in self._cgroups:
			self._cgroup_create_group(self._cgroup)
		for cg in self._cgroups:
			self._cgroup_create_group(cg)

	def _cgroup_initialize(self):
		log.debug("Initializing cgroups settings")
		try:
			os.makedirs(self._cgroup_mount_point, consts.DEF_CGROUP_MODE)
		except OSError as e:
			log.error("Unable to create cgroup mount point: %s" % e)
		(ret, out) = self._cmd.execute(["mount", "-t", "cgroup", "-o", "cpuset", "cpuset", self._cgroup_mount_point])
		if ret != 0:
			log.error("Unable to mount '%s'" % self._cgroup_mount_point)

	def _remove_dir(self, cgroup):
		try:
			os.rmdir(cgroup)
		except OSError as e:
			log.error("Unable to remove directory '%s': %s" % (cgroup, e))

	def _cgroup_finalize_groups(self):
		for cg in reversed(self._cgroups):
			self._remove_dir("%s/%s" % (self._cgroup_mount_point, cg))
		if self._cgroup is not None and not self._cgroup in self._cgroups:
			self._remove_dir("%s/%s" % (self._cgroup_mount_point, self._cgroup))

	def _cgroup_finalize(self):
		log.debug("Removing cgroups settings")
		(ret, out) = self._cmd.execute(["umount", self._cgroup_mount_point])
		if ret != 0:
			log.error("Unable to umount '%s'" % self._cgroup_mount_point)
			return
False self._remove_dir(self._cgroup_mount_point) d = os.path.dirname(self._cgroup_mount_point) if (d != "/"): self._remove_dir(d) def _cgroup_set_affinity_one(self, cgroup, affinity, backup = False): if affinity != "": log.debug("Setting cgroup '%s' affinity to '%s'" % (cgroup, affinity)) else: log.debug("Skipping cgroup '%s', empty affinity requested" % cgroup) return path = "%s/%s/%s" % (self._cgroup_mount_point, cgroup, "cpuset.cpus") if backup: orig_affinity = self._cmd.read_file(path, err_ret = "ERR", no_error = True).strip() if orig_affinity != "ERR": self._cgroups_original_affinity[cgroup] = orig_affinity else: log.err("Refusing to set affinity of cgroup '%s', reading original affinity failed" % cgroup) return if not self._cmd.write_to_file(path, affinity, no_error = True): log.error("Unable to set affinity '%s' for cgroup '%s'" % (affinity, cgroup)) def _cgroup_set_affinity(self): if self._cgroup_affinity_initialized: return log.debug("Setting cgroups affinities") if self._affinity is not None and self._cgroup is not None and not self._cgroup in self._cgroups: self._cgroup_set_affinity_one(self._cgroup, self._affinity, backup = True) for cg in self._cgroups.items(): self._cgroup_set_affinity_one(cg[0], cg[1], backup = True) self._cgroup_affinity_initialized = True def _cgroup_restore_affinity(self): log.debug("Restoring cgroups affinities") for cg in self._cgroups_original_affinity.items(): self._cgroup_set_affinity_one(cg[0], cg[1]) def _instance_apply_static(self, instance): # need to get "cgroup_mount_point_init", "cgroup_mount_point", "cgroup_groups_init", # "cgroup", and initialize mount point and cgroups before super class implementation call self._cgroup_mount_point = self._variables.expand(instance.options["cgroup_mount_point"]) self._cgroup_mount_point_init = self._cmd.get_bool(self._variables.expand( instance.options["cgroup_mount_point_init"])) == "1" self._cgroup_groups_init = self._cmd.get_bool(self._variables.expand( 
instance.options["cgroup_groups_init"])) == "1" self._cgroup = self._sanitize_cgroup_path(self._variables.expand( instance.options["cgroup_for_isolated_cores"])) if self._cgroup_mount_point_init: self._cgroup_initialize() if self._cgroup_groups_init or self._cgroup_mount_point_init: self._cgroup_initialize_groups() super(SchedulerPlugin, self)._instance_apply_static(instance) self._cgroup_set_affinity() try: ps = self.get_processes() except (OSError, IOError) as e: log.error("error applying tuning, cannot get information about running processes: %s" % e) return sched_cfg = [(option, str(value).split(":", 4)) for option, value in instance._scheduler.items()] buf = [(option, self._convert_sched_cfg(vals)) for option, vals in sched_cfg if re.match(r"group\.", option) and len(vals) == 5] sched_cfg = sorted(buf, key=lambda option_vals: option_vals[1][0]) sched_all = dict() # for runtime tuning instance._sched_lookup = {} for option, (rule_prio, scheduler, priority, affinity, regex) \ in sched_cfg: try: r = re.compile(regex) except re.error as e: log.error("error compiling regular expression: '%s'" % str(regex)) continue processes = [(pid, cmd) for pid, cmd in ps.items() if re.search(r, cmd) is not None] #cmd - process name, option - group name sched = dict([(pid, (cmd, option, scheduler, priority, affinity, regex)) for pid, cmd in processes]) sched_all.update(sched) # make any contained regexes non-capturing: replace "(" with "(?:", # unless the "(" is preceded by "\" or followed by "?" 
regex = re.sub(r"(?<!\\)\((?!\?)", "(?:", str(regex)) instance._sched_lookup[regex] = [scheduler, priority, affinity] for pid, (cmd, option, scheduler, priority, affinity, regex) \ in sched_all.items(): self._tune_process(pid, cmd, scheduler, priority, affinity) self._storage.set(self._scheduler_storage_key, self._scheduler_original) if self._daemon and instance._runtime_tuning: instance._thread = threading.Thread(target = self._thread_code, args = [instance]) instance._thread.start() def _restore_ps_affinity(self): try: ps = self.get_processes() except (OSError, IOError) as e: log.error("error unapplying tuning, cannot get information about running processes: %s" % e) return for pid, orig_params in self._scheduler_original.items(): # if command line for the pid didn't change, it's very probably the same process if pid not in ps or ps[pid] != orig_params.cmdline: continue if orig_params.scheduler is not None \ and orig_params.priority is not None: self._set_rt(pid, orig_params.scheduler, orig_params.priority) if orig_params.cgroup is not None: self._set_cgroup(pid, orig_params.cgroup) elif orig_params.affinity is not None: self._set_affinity(pid, orig_params.affinity) self._scheduler_original = {} self._storage.unset(self._scheduler_storage_key) def _cgroup_cleanup_tasks_one(self, cgroup): cnt = int(consts.CGROUP_CLEANUP_TASKS_RETRY) data = " " while data != "" and cnt > 0: data = self._cmd.read_file("%s/%s/%s" % (self._cgroup_mount_point, cgroup, "tasks"), err_ret = " ", no_error = True) if data not in ["", " "]: for l in data.split("\n"): self._cmd.write_to_file("%s/%s" % (self._cgroup_mount_point, "tasks"), l, no_error = True) cnt -= 1 if cnt == 0: log.warn("Unable to cleanup tasks from cgroup '%s'" % cgroup) def _cgroup_cleanup_tasks(self): if self._cgroup is not None and not self._cgroup in self._cgroups: self._cgroup_cleanup_tasks_one(self._cgroup) for cg in self._cgroups: self._cgroup_cleanup_tasks_one(cg) def _instance_unapply_static(self, instance, 
full_rollback = False): super(SchedulerPlugin, self)._instance_unapply_static(instance, full_rollback) if self._daemon and instance._runtime_tuning: instance._terminate.set() instance._thread.join() self._restore_ps_affinity() self._cgroup_restore_affinity() self._cgroup_cleanup_tasks() if self._cgroup_groups_init or self._cgroup_mount_point_init: self._cgroup_finalize_groups() if self._cgroup_mount_point_init: self._cgroup_finalize() def _cgroup_verify_affinity_one(self, cgroup, affinity): log.debug("Verifying cgroup '%s' affinity" % cgroup) path = "%s/%s/%s" % (self._cgroup_mount_point, cgroup, "cpuset.cpus") current_affinity = self._cmd.read_file(path, err_ret = "ERR", no_error = True) if current_affinity == "ERR": return True current_affinity = self._cmd.cpulist2string(self._cmd.cpulist_pack(current_affinity)) affinity = self._cmd.cpulist2string(self._cmd.cpulist_pack(affinity)) affinity_description = "cgroup '%s' affinity" % cgroup if current_affinity == affinity: log.info(consts.STR_VERIFY_PROFILE_VALUE_OK % (affinity_description, current_affinity)) return True else: log.error(consts.STR_VERIFY_PROFILE_VALUE_FAIL % (affinity_description, current_affinity, affinity)) return False def _cgroup_verify_affinity(self): log.debug("Veryfying cgroups affinities") ret = True if self._affinity is not None and self._cgroup is not None and not self._cgroup in self._cgroups: ret = ret and self._cgroup_verify_affinity_one(self._cgroup, self._affinity) for cg in self._cgroups.items(): ret = ret and self._cgroup_verify_affinity_one(cg[0], cg[1]) return ret def _instance_verify_static(self, instance, ignore_missing, devices): ret1 = super(SchedulerPlugin, self)._instance_verify_static(instance, ignore_missing, devices) ret2 = self._cgroup_verify_affinity() return ret1 and ret2 def _add_pid(self, instance, pid, r): try: cmd = self._get_cmdline(pid) except (OSError, IOError) as e: if e.errno == errno.ENOENT \ or e.errno == errno.ESRCH: log.debug("Failed to get cmdline of PID %d, 
the task vanished." % pid) else: log.error("Failed to get cmdline of PID %d: %s" % (pid, e)) return v = self._cmd.re_lookup(instance._sched_lookup, cmd, r) if v is not None and not pid in self._scheduler_original: log.debug("tuning new process '%s' with PID '%d' by '%s'" % (cmd, pid, str(v))) (sched, prio, affinity) = v self._tune_process(pid, cmd, sched, prio, affinity) self._storage.set(self._scheduler_storage_key, self._scheduler_original) def _remove_pid(self, instance, pid): if pid in self._scheduler_original: del self._scheduler_original[pid] log.debug("removed PID %d from the rollback database" % pid) self._storage.set(self._scheduler_storage_key, self._scheduler_original) def _thread_code(self, instance): r = self._cmd.re_lookup_compile(instance._sched_lookup) poll = select.poll() # Store the file objects in a local variable so that they don't # go out of scope too soon. This is a workaround for # python3-perf bug rhbz#1659445. fds = instance._evlist.get_pollfd() for fd in fds: poll.register(fd) while not instance._terminate.is_set(): # timeout to poll in milliseconds if len(poll.poll(self._sleep_interval * 1000)) > 0 and not instance._terminate.is_set(): read_events = True while read_events: read_events = False for cpu in self._cpus: event = instance._evlist.read_on_cpu(cpu) if event: read_events = True if event.type == perf.RECORD_COMM or \ (self._perf_process_fork_value and event.type == perf.RECORD_FORK): self._add_pid(instance, int(event.tid), r) elif event.type == perf.RECORD_EXIT: self._remove_pid(instance, int(event.tid)) @command_custom("cgroup_ps_blacklist", per_device = False) def _cgroup_ps_blacklist(self, enabling, value, verify, ignore_missing): # currently unsupported if verify: return None if enabling and value is not None: self._cgroup_ps_blacklist_re = "|".join(["(%s)" % v for v in re.split(r"(?<!\\);", str(value))]) @command_custom("ps_whitelist", per_device = False) def _ps_whitelist(self, enabling, value, verify, ignore_missing): # 
currently unsupported if verify: return None if enabling and value is not None: self._ps_whitelist = "|".join(["(%s)" % v for v in re.split(r"(?<!\\);", str(value))]) @command_custom("ps_blacklist", per_device = False) def _ps_blacklist(self, enabling, value, verify, ignore_missing): # currently unsupported if verify: return None if enabling and value is not None: self._ps_blacklist = "|".join(["(%s)" % v for v in re.split(r"(?<!\\);", str(value))]) @command_custom("default_irq_smp_affinity", per_device = False) def _default_irq_smp_affinity(self, enabling, value, verify, ignore_missing): # currently unsupported if verify: return None if enabling and value is not None: if value in ["calc", "ignore"]: self._default_irq_smp_affinity_value = value else: self._default_irq_smp_affinity_value = self._cmd.cpulist_unpack(value) @command_custom("perf_process_fork", per_device = False) def _perf_process_fork(self, enabling, value, verify, ignore_missing): # currently unsupported if verify: return None if enabling and value is not None: self._perf_process_fork_value = self._cmd.get_bool(value) == "1" # Raises OSError # Raises SystemError with old (pre-0.4) python-schedutils # instead of OSError # If PID doesn't exist, errno == ESRCH def _get_affinity(self, pid): res = self._scheduler_utils.get_affinity(pid) log.debug("Read affinity '%s' of PID %d" % (res, pid)) return res def _set_affinity(self, pid, affinity): log.debug("Setting CPU affinity of PID %d to '%s'." % (pid, affinity)) try: self._scheduler_utils.set_affinity(pid, affinity) return True # Workaround for old python-schedutils (pre-0.4) which # incorrectly raised SystemError instead of OSError except (SystemError, OSError) as e: if hasattr(e, "errno") and e.errno == errno.ESRCH: log.debug("Failed to set affinity of PID %d, the task vanished." 
% pid) else: res = self._affinity_changeable(pid) if res == 1 or res == -2: log.error("Failed to set affinity of PID %d to '%s': %s" % (pid, affinity, e)) return False # returns intersection of affinity1 with affinity2, if intersection is empty it returns affinity3 def _get_intersect_affinity(self, affinity1, affinity2, affinity3): aff = set(affinity1).intersection(set(affinity2)) if aff: return list(aff) return affinity3 def _set_all_obj_affinity(self, objs, affinity, threads = False): psl = [v for v in objs if re.search(self._ps_whitelist, self._get_stat_comm(v)) is not None] if self._ps_blacklist != "": psl = [v for v in psl if re.search(self._ps_blacklist, self._get_stat_comm(v)) is None] if self._cgroup_ps_blacklist_re != "": psl = [v for v in psl if re.search(self._cgroup_ps_blacklist_re, self._get_stat_cgroup(v)) is None] psd = dict([(v.pid, v) for v in psl]) for pid in psd: try: cmd = self._get_cmdline(psd[pid]) except (OSError, IOError) as e: if e.errno == errno.ENOENT \ or e.errno == errno.ESRCH: log.debug("Failed to get cmdline of PID %d, the task vanished." 
% pid) else: log.error("Refusing to set affinity of PID %d, failed to get its cmdline: %s" % (pid, e)) continue cont = self._tune_process_affinity(pid, affinity, intersect = True) if not cont: continue if pid in self._scheduler_original: self._scheduler_original[pid].cmdline = cmd # process threads if not threads and "threads" in psd[pid]: self._set_all_obj_affinity( psd[pid]["threads"].values(), affinity, True) def _get_stat_cgroup(self, o): try: return o["cgroups"] except (OSError, IOError, KeyError): return "" def _get_stat_comm(self, o): try: return o["stat"]["comm"] except (OSError, IOError, KeyError): return "" def _set_ps_affinity(self, affinity): try: ps = procfs.pidstats() ps.reload_threads() self._set_all_obj_affinity(ps.values(), affinity, False) except (OSError, IOError) as e: log.error("error applying tuning, cannot get information about running processes: %s" % e) # Returns 0 on success, -2 if changing the affinity is not # supported, -1 if some other error occurs. def _set_irq_affinity(self, irq, affinity, restoring): try: affinity_hex = self._cmd.cpulist2hex(affinity) log.debug("Setting SMP affinity of IRQ %s to '%s'" % (irq, affinity_hex)) filename = "/proc/irq/%s/smp_affinity" % irq with open(filename, "w") as f: f.write(affinity_hex) return 0 except (OSError, IOError) as e: # EIO is returned by # kernel/irq/proc.c:write_irq_affinity() if changing # the affinity is not supported # (at least on kernels 3.10 and 4.18) if hasattr(e, "errno") and e.errno == errno.EIO \ and not restoring: log.debug("Setting SMP affinity of IRQ %s is not supported" % irq) return -2 else: log.error("Failed to set SMP affinity of IRQ %s to '%s': %s" % (irq, affinity_hex, e)) return -1 def _set_default_irq_affinity(self, affinity): try: affinity_hex = self._cmd.cpulist2hex(affinity) log.debug("Setting default SMP IRQ affinity to '%s'" % affinity_hex) with open("/proc/irq/default_smp_affinity", "w") as f: f.write(affinity_hex) except (OSError, IOError) as e: 
log.error("Failed to set default SMP IRQ affinity to '%s': %s" % (affinity_hex, e)) def _set_all_irq_affinity(self, affinity): irq_original = IRQAffinities() irqs = procfs.interrupts() for irq in irqs.keys(): try: prev_affinity = irqs[irq]["affinity"] log.debug("Read affinity of IRQ '%s': '%s'" % (irq, prev_affinity)) except KeyError: continue _affinity = self._get_intersect_affinity(prev_affinity, affinity, affinity) if set(_affinity) == set(prev_affinity): continue res = self._set_irq_affinity(irq, _affinity, False) if res == 0: irq_original.irqs[irq] = prev_affinity elif res == -2: irq_original.unchangeable.append(irq) # default affinity prev_affinity_hex = self._cmd.read_file("/proc/irq/default_smp_affinity") prev_affinity = self._cmd.hex2cpulist(prev_affinity_hex) if self._default_irq_smp_affinity_value == "calc": _affinity = self._get_intersect_affinity(prev_affinity, affinity, affinity) elif self._default_irq_smp_affinity_value != "ignore": _affinity = self._default_irq_smp_affinity_value if self._default_irq_smp_affinity_value != "ignore": self._set_default_irq_affinity(_affinity) irq_original.default = prev_affinity self._storage.set(self._irq_storage_key, irq_original) def _restore_all_irq_affinity(self): irq_original = self._storage.get(self._irq_storage_key, None) if irq_original is None: return for irq, affinity in irq_original.irqs.items(): self._set_irq_affinity(irq, affinity, True) if self._default_irq_smp_affinity_value != "ignore": affinity = irq_original.default self._set_default_irq_affinity(affinity) self._storage.unset(self._irq_storage_key) def _verify_irq_affinity(self, irq_description, correct_affinity, current_affinity): res = set(current_affinity).issubset(set(correct_affinity)) if res: log.info(consts.STR_VERIFY_PROFILE_VALUE_OK % (irq_description, current_affinity)) else: log.error(consts.STR_VERIFY_PROFILE_VALUE_FAIL % (irq_description, current_affinity, correct_affinity)) return res def _verify_all_irq_affinity(self, correct_affinity, 
ignore_missing): irq_original = self._storage.get(self._irq_storage_key, None) irqs = procfs.interrupts() res = True for irq in irqs.keys(): if irq in irq_original.unchangeable and ignore_missing: description = "IRQ %s does not support changing SMP affinity" % irq log.info(consts.STR_VERIFY_PROFILE_VALUE_MISSING % description) continue try: current_affinity = irqs[irq]["affinity"] log.debug("Read SMP affinity of IRQ '%s': '%s'" % (irq, current_affinity)) irq_description = "SMP affinity of IRQ %s" % irq if not self._verify_irq_affinity( irq_description, correct_affinity, current_affinity): res = False except KeyError: continue current_affinity_hex = self._cmd.read_file( "/proc/irq/default_smp_affinity") current_affinity = self._cmd.hex2cpulist(current_affinity_hex) if self._default_irq_smp_affinity_value != "ignore" and not self._verify_irq_affinity("default IRQ SMP affinity", current_affinity, correct_affinity if self._default_irq_smp_affinity_value == "calc" else self._default_irq_smp_affinity_value): res = False return res @command_custom("isolated_cores", per_device = False, priority = 10) def _isolated_cores(self, enabling, value, verify, ignore_missing): affinity = None self._affinity = None if value is not None: isolated = set(self._cmd.cpulist_unpack(value)) present = set(self._cpus) if isolated.issubset(present): affinity = list(present - isolated) self._affinity = self._cmd.cpulist2string(affinity) else: str_cpus = self._cmd.cpulist2string(self._cpus) log.error("Invalid isolated_cores specified, '%s' does not match available cores '%s'" % (value, str_cpus)) if (enabling or verify) and affinity is None: return None # currently only IRQ affinity verification is supported if verify: return self._verify_all_irq_affinity(affinity, ignore_missing) elif enabling: if self._cgroup: self._cgroup_set_affinity() ps_affinity = "cgroup.%s" % self._cgroup else: ps_affinity = affinity self._set_ps_affinity(ps_affinity) self._set_all_irq_affinity(affinity) else: # 
Restoring processes' affinity is done in # _instance_unapply_static() self._restore_all_irq_affinity() def _get_sched_knob_path(self, prefix, namespace, knob): key = "%s_%s_%s" % (prefix, namespace, knob) path = self._sched_knob_paths_cache.get(key) if path: return path path = "/proc/sys/kernel/%s_%s" % (namespace, knob) if not os.path.exists(path): if prefix == "": path = "%s/%s" % (namespace, knob) else: path = "%s/%s/%s" % (prefix, namespace, knob) path = "/sys/kernel/debug/%s" % path if self._secure_boot_hint is None: self._secure_boot_hint = True self._sched_knob_paths_cache[key] = path return path def _get_sched_knob(self, prefix, namespace, knob): data = self._cmd.read_file(self._get_sched_knob_path(prefix, namespace, knob), err_ret = None) if data is None: log.error("Error reading '%s'" % knob) if self._secure_boot_hint: log.error("This may not work with Secure Boot or kernel_lockdown (this hint is logged only once)") self._secure_boot_hint = False return data def _set_sched_knob(self, prefix, namespace, knob, value, sim): if value is None: return None if not sim: if not self._cmd.write_to_file(self._get_sched_knob_path(prefix, namespace, knob), value): log.error("Error writing value '%s' to '%s'" % (value, knob)) return value @command_get("sched_min_granularity_ns") def _get_sched_min_granularity_ns(self): return self._get_sched_knob("", "sched", "min_granularity_ns") @command_set("sched_min_granularity_ns") def _set_sched_min_granularity_ns(self, value, sim): return self._set_sched_knob("", "sched", "min_granularity_ns", value, sim) @command_get("sched_latency_ns") def _get_sched_latency_ns(self): return self._get_sched_knob("", "sched", "latency_ns") @command_set("sched_latency_ns") def _set_sched_latency_ns(self, value, sim): return self._set_sched_knob("", "sched", "latency_ns", value, sim) @command_get("sched_wakeup_granularity_ns") def _get_sched_wakeup_granularity_ns(self): return self._get_sched_knob("", "sched", "wakeup_granularity_ns") 
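	# Illustrative note (comments only, not executed; paths below are an
	# assumption about the running kernel, not part of this file's logic):
	# _get_sched_knob_path() first tries the legacy sysctl location and only
	# then falls back to debugfs, where kernels >= 5.13 moved the CFS
	# tunables. On such a kernel without the sysctl files:
	#
	#   self._get_sched_knob_path("", "sched", "latency_ns")
	#   # /proc/sys/kernel/sched_latency_ns is missing, so this resolves to
	#   # "/sys/kernel/debug/sched/latency_ns"
	#
	#   self._get_sched_knob_path("sched", "numa_balancing", "scan_delay_ms")
	#   # falls back to "/sys/kernel/debug/sched/numa_balancing/scan_delay_ms"
	#
	# The debugfs fallback is also why _get_sched_knob() logs the Secure
	# Boot / kernel_lockdown hint: debugfs may be unreadable in that case.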
	@command_set("sched_wakeup_granularity_ns")
	def _set_sched_wakeup_granularity_ns(self, value, sim):
		return self._set_sched_knob("", "sched", "wakeup_granularity_ns", value, sim)

	@command_get("sched_tunable_scaling")
	def _get_sched_tunable_scaling(self):
		return self._get_sched_knob("", "sched", "tunable_scaling")

	@command_set("sched_tunable_scaling")
	def _set_sched_tunable_scaling(self, value, sim):
		return self._set_sched_knob("", "sched", "tunable_scaling", value, sim)

	@command_get("sched_migration_cost_ns")
	def _get_sched_migration_cost_ns(self):
		return self._get_sched_knob("", "sched", "migration_cost_ns")

	@command_set("sched_migration_cost_ns")
	def _set_sched_migration_cost_ns(self, value, sim):
		return self._set_sched_knob("", "sched", "migration_cost_ns", value, sim)

	@command_get("sched_nr_migrate")
	def _get_sched_nr_migrate(self):
		return self._get_sched_knob("", "sched", "nr_migrate")

	@command_set("sched_nr_migrate")
	def _set_sched_nr_migrate(self, value, sim):
		return self._set_sched_knob("", "sched", "nr_migrate", value, sim)

	@command_get("numa_balancing_scan_delay_ms")
	def _get_numa_balancing_scan_delay_ms(self):
		return self._get_sched_knob("sched", "numa_balancing", "scan_delay_ms")

	@command_set("numa_balancing_scan_delay_ms")
	def _set_numa_balancing_scan_delay_ms(self, value, sim):
		return self._set_sched_knob("sched", "numa_balancing", "scan_delay_ms", value, sim)

	@command_get("numa_balancing_scan_period_min_ms")
	def _get_numa_balancing_scan_period_min_ms(self):
		return self._get_sched_knob("sched", "numa_balancing", "scan_period_min_ms")

	@command_set("numa_balancing_scan_period_min_ms")
	def _set_numa_balancing_scan_period_min_ms(self, value, sim):
		return self._set_sched_knob("sched", "numa_balancing", "scan_period_min_ms", value, sim)

	@command_get("numa_balancing_scan_period_max_ms")
	def _get_numa_balancing_scan_period_max_ms(self):
		return self._get_sched_knob("sched", "numa_balancing", "scan_period_max_ms")

	@command_set("numa_balancing_scan_period_max_ms")
	def _set_numa_balancing_scan_period_max_ms(self, value, sim):
		return self._set_sched_knob("sched", "numa_balancing", "scan_period_max_ms", value, sim)

	@command_get("numa_balancing_scan_size_mb")
	def _get_numa_balancing_scan_size_mb(self):
		return self._get_sched_knob("sched", "numa_balancing", "scan_size_mb")

	@command_set("numa_balancing_scan_size_mb")
	def _set_numa_balancing_scan_size_mb(self, value, sim):
		return self._set_sched_knob("sched", "numa_balancing", "scan_size_mb", value, sim)

# ==== File: tuned-2.19.0.29+git.b894a3e/tuned/plugins/plugin_script.py ====

import tuned.consts as consts
from . import base
import tuned.logs
import os
from subprocess import Popen, PIPE

log = tuned.logs.get()

class ScriptPlugin(base.Plugin):
	"""
	`script`::

	Executes an external script or binary when the profile is loaded or
	unloaded. You can choose an arbitrary executable.
	+
	IMPORTANT: The `script` plug-in is provided mainly for compatibility
	with earlier releases. Prefer other *TuneD* plug-ins if they cover the
	required functionality.
	+
	*TuneD* calls the executable with one of the following arguments:
	+
	--
	** `start` when loading the profile
	** `stop` when unloading the profile
	--
	+
	You need to correctly implement the `stop` action in your executable
	and revert all settings that you changed during the `start` action.
	Otherwise, the roll-back step after changing your *TuneD* profile will
	not work.
	+
	Bash scripts can import the [filename]`/usr/lib/tuned/functions` Bash
	library and use the functions defined there. Use these functions only
	for functionality that is not natively provided by *TuneD*. If a
	function name starts with an underscore, such as
	`_wifi_set_power_level`, consider the function private and do not use
	it in your scripts, because it might change in the future.
	+
	Specify the path to the executable using the `script` parameter in the
	plug-in configuration.
	+
	.Running a Bash script from a profile
	====
	To run a Bash script named `script.sh` that is located in the profile
	directory, use:
	----
	[script]
	script=${i:PROFILE_DIR}/script.sh
	----
	====
	"""

	@classmethod
	def _get_config_options(self):
		return {
			"script" : None,
		}

	def _instance_init(self, instance):
		instance._has_static_tuning = True
		instance._has_dynamic_tuning = False

		if instance.options["script"] is not None:
			# FIXME: this hack originated from profiles merger
			assert isinstance(instance.options["script"], list)
			instance._scripts = instance.options["script"]
		else:
			instance._scripts = []

	def _instance_cleanup(self, instance):
		pass

	def _call_scripts(self, scripts, arguments):
		ret = True
		for script in scripts:
			environ = os.environ
			environ.update(self._variables.get_env())
			log.info("calling script '%s' with arguments '%s'" % (script, str(arguments)))
			log.debug("using environment '%s'" % str(list(environ.items())))
			try:
				proc = Popen([script] + arguments,
						stdout=PIPE, stderr=PIPE,
						close_fds=True, env=environ,
						universal_newlines = True,
						cwd = os.path.dirname(script))
				out, err = proc.communicate()
				if len(err):
					log.error("script '%s' error output: '%s'" % (script, err[:-1]))
				if proc.returncode:
					log.error("script '%s' returned error code: %d" % (script, proc.returncode))
					ret = False
			except (OSError, IOError) as e:
				log.error("script '%s' error: %s" % (script, e))
				ret = False
		return ret

	def _instance_apply_static(self, instance):
		super(ScriptPlugin, self)._instance_apply_static(instance)
		self._call_scripts(instance._scripts, ["start"])

	def _instance_verify_static(self, instance, ignore_missing, devices):
		ret = True
		if super(ScriptPlugin, self)._instance_verify_static(instance,
				ignore_missing, devices) == False:
			ret = False
		args = ["verify"]
		if ignore_missing:
			args += ["ignore_missing"]
		if self._call_scripts(instance._scripts, args) == True:
			log.info(consts.STR_VERIFY_PROFILE_OK % instance._scripts)
		else:
			log.error(consts.STR_VERIFY_PROFILE_FAIL % instance._scripts)
			ret = False
		return ret

	def _instance_unapply_static(self, instance, full_rollback = False):
		args = ["stop"]
		if full_rollback:
			args = args + ["full_rollback"]
		self._call_scripts(reversed(instance._scripts), args)
		super(ScriptPlugin, self)._instance_unapply_static(instance, full_rollback)

# ==== File: tuned-2.19.0.29+git.b894a3e/tuned/plugins/plugin_scsi_host.py ====

import errno
from . import hotplug
from .decorators import *
import tuned.logs
import tuned.consts as consts
from tuned.utils.commands import commands
import os
import re

log = tuned.logs.get()

class SCSIHostPlugin(hotplug.Plugin):
	"""
	`scsi_host`::

	Tunes options for SCSI hosts.
	+
	The plug-in sets Aggressive Link Power Management (ALPM) to the value
	specified by the [option]`alpm` option. The option takes one of three
	values: `min_power`, `medium_power` and `max_performance`.
	+
	NOTE: ALPM is only available on SATA controllers that use the Advanced
	Host Controller Interface (AHCI).
	+
	.ALPM setting when extended periods of idle time are expected
	====
	----
	[scsi_host]
	alpm=min_power
	----
	====
	"""

	def __init__(self, *args, **kwargs):
		super(SCSIHostPlugin, self).__init__(*args, **kwargs)
		self._cmd = commands()

	def _init_devices(self):
		super(SCSIHostPlugin, self)._init_devices()
		self._devices_supported = True
		self._free_devices = set()
		for device in self._hardware_inventory.get_devices("scsi"):
			if self._device_is_supported(device):
				self._free_devices.add(device.sys_name)
		self._assigned_devices = set()

	def _get_device_objects(self, devices):
		return [self._hardware_inventory.get_device("scsi", x) for x in devices]

	@classmethod
	def _device_is_supported(cls, device):
		return device.device_type == "scsi_host"

	def _hardware_events_init(self):
		self._hardware_inventory.subscribe(self, "scsi", self._hardware_events_callback)

	def _hardware_events_cleanup(self):
		self._hardware_inventory.unsubscribe(self)

	def _hardware_events_callback(self, event, device):
		if self._device_is_supported(device):
			super(SCSIHostPlugin, self)._hardware_events_callback(event, device)

	def _added_device_apply_tuning(self, instance, device_name):
		super(SCSIHostPlugin, self)._added_device_apply_tuning(instance, device_name)

	def _removed_device_unapply_tuning(self, instance, device_name):
		super(SCSIHostPlugin, self)._removed_device_unapply_tuning(instance, device_name)

	@classmethod
	def _get_config_options(cls):
		return {
			"alpm" : None,
		}

	def _instance_init(self, instance):
		instance._has_static_tuning = True
		instance._has_dynamic_tuning = False

	def _instance_cleanup(self, instance):
		pass

	def _get_alpm_policy_file(self, device):
		return os.path.join("/sys/class/scsi_host/", str(device), "link_power_management_policy")

	@command_set("alpm", per_device = True)
	def _set_alpm(self, policy, device, sim):
		if policy is None:
			return None
		policy_file = self._get_alpm_policy_file(device)
		if not sim:
			if os.path.exists(policy_file):
				self._cmd.write_to_file(policy_file, policy)
			else:
				log.info("ALPM control file ('%s') not found, skipping ALPM setting for '%s'" % (policy_file, str(device)))
				return None
		return policy

	@command_get("alpm")
	def _get_alpm(self, device, ignore_missing=False):
		policy_file = self._get_alpm_policy_file(device)
		policy = self._cmd.read_file(policy_file, no_error = True).strip()
		return policy if policy != "" else None

# ==== File: tuned-2.19.0.29+git.b894a3e/tuned/plugins/plugin_selinux.py ====

import os
from . import base
from .decorators import *
import tuned.logs
from tuned.plugins import exceptions
from tuned.utils.commands import commands

log = tuned.logs.get()

class SelinuxPlugin(base.Plugin):
	"""
	`selinux`::

	Plug-in for tuning SELinux options.
	+
	SELinux decisions, such as allowing or denying access, are cached.
	This cache is known as the Access Vector Cache (AVC). When using these
	cached decisions, SELinux policy rules need to be checked less, which
	increases performance. The [option]`avc_cache_threshold` option allows
	adjusting the maximum number of AVC entries.
	+
	NOTE: Prior to changing the default value, evaluate the system
	performance with care. Increasing the value could potentially decrease
	the performance by making AVC slow.
	+
	.Increase the AVC cache threshold for hosts with containers.
==== ---- [selinux] avc_cache_threshold=8192 ---- ==== """ @classmethod def _get_selinux_path(self): path = "/sys/fs/selinux" if not os.path.exists(path): path = "/selinux" if not os.path.exists(path): path = None return path def __init__(self, *args, **kwargs): self._cmd = commands() self._selinux_path = self._get_selinux_path() if self._selinux_path is None: raise exceptions.NotSupportedPluginException("SELinux is not enabled on your system or an incompatible version is in use.") self._cache_threshold_path = os.path.join(self._selinux_path, "avc", "cache_threshold") super(SelinuxPlugin, self).__init__(*args, **kwargs) @classmethod def _get_config_options(self): return { "avc_cache_threshold" : None, } def _instance_init(self, instance): instance._has_static_tuning = True instance._has_dynamic_tuning = False def _instance_cleanup(self, instance): pass @command_set("avc_cache_threshold") def _set_avc_cache_threshold(self, value, sim): if value is None: return None threshold = int(value) if threshold >= 0: if not sim: self._cmd.write_to_file(self._cache_threshold_path, threshold) return threshold else: return None @command_get("avc_cache_threshold") def _get_avc_cache_threshold(self): value = self._cmd.read_file(self._cache_threshold_path) if len(value) > 0: return int(value) return None 07070100000156000081A40000000000000000000000016391BC3A000029D7000000000000000000000000000000000000003C00000000tuned-2.19.0.29+git.b894a3e/tuned/plugins/plugin_service.pyfrom .
import base import collections import tuned.consts as consts from .decorators import * import os import re import tuned.logs from tuned.utils.commands import commands log = tuned.logs.get() cmd = commands() class Service(): def __init__(self, start = None, enable = None, cfg_file = None, runlevel = None): self.enable = enable self.start = start self.cfg_file = cfg_file self.runlevel = runlevel class InitHandler(): def runlevel_get(self): (retcode, out) = cmd.execute(["runlevel"]) return out.split()[-1] if retcode == 0 else None def daemon_reload(self): cmd.execute(["telinit", "q"]) def cfg_install(self, name, cfg_file): pass def cfg_uninstall(self, name, cfg_file): pass def cfg_verify(self, name, cfg_file): return None # no enable/disable class SysVBasicHandler(InitHandler): def start(self, name): cmd.execute(["service", name, "start"]) def stop(self, name): cmd.execute(["service", name, "stop"]) def enable(self, name, runlevel): raise NotImplementedError() def disable(self, name, runlevel): raise NotImplementedError() def is_running(self, name): (retcode, out) = cmd.execute(["service", name, "status"], no_errors = [0]) return retcode == 0 def is_enabled(self, name, runlevel): raise NotImplementedError() class SysVHandler(SysVBasicHandler): def enable(self, name, runlevel): cmd.execute(["chkconfig", "--level", runlevel, name, "on"]) def disable(self, name, runlevel): cmd.execute(["chkconfig", "--level", runlevel, name, "off"]) def is_enabled(self, name, runlevel): (retcode, out) = cmd.execute(["chkconfig", "--list", name]) return out.split("%s:" % str(runlevel))[1][:2] == "on" if retcode == 0 else None class SysVRCHandler(SysVBasicHandler): def enable(self, name, runlevel): cmd.execute(["sysv-rc-conf", "--level", runlevel, name, "on"]) def disable(self, name, runlevel): cmd.execute(["sysv-rc-conf", "--level", runlevel, name, "off"]) def is_enabled(self, name, runlevel): (retcode, out) = cmd.execute(["sysv-rc-conf", "--list", name]) return out.split("%s:" % 
str(runlevel))[1][:2] == "on" if retcode == 0 else None class OpenRCHandler(InitHandler): def runlevel_get(self): (retcode, out) = cmd.execute(["rc-status", "-r"]) return out.strip() if retcode == 0 else None def start(self, name): cmd.execute(["rc-service", name, "start"]) def stop(self, name): cmd.execute(["rc-service", name, "stop"]) def enable(self, name, runlevel): cmd.execute(["rc-update", "add", name, runlevel]) def disable(self, name, runlevel): cmd.execute(["rc-update", "del", name, runlevel]) def is_running(self, name): (retcode, out) = cmd.execute(["rc-service", name, "status"], no_errors = [0]) return retcode == 0 def is_enabled(self, name, runlevel): (retcode, out) = cmd.execute(["rc-update", "show", runlevel]) return bool(re.search(r"\b" + re.escape(name) + r"\b", out)) class SystemdHandler(InitHandler): # runlevel not used def runlevel_get(self): return "" def start(self, name): cmd.execute(["systemctl", "restart", name]) def stop(self, name): cmd.execute(["systemctl", "stop", name]) def enable(self, name, runlevel): cmd.execute(["systemctl", "enable", name]) def disable(self, name, runlevel): cmd.execute(["systemctl", "disable", name]) def is_running(self, name): (retcode, out) = cmd.execute(["systemctl", "is-active", name], no_errors = [0]) return retcode == 0 def is_enabled(self, name, runlevel): (retcode, out) = cmd.execute(["systemctl", "is-enabled", name], no_errors = [0]) status = out.strip() return True if status == "enabled" else False if status =="disabled" else None def cfg_install(self, name, cfg_file): log.info("installing service configuration overlay file '%s' for service '%s'" % (cfg_file, name)) if not os.path.exists(cfg_file): log.error("Unable to find service configuration '%s'" % cfg_file) return dirpath = consts.SERVICE_SYSTEMD_CFG_PATH % name try: os.makedirs(dirpath, consts.DEF_SERVICE_CFG_DIR_MODE) except OSError as e: log.error("Unable to create directory '%s': %s" % (dirpath, e)) return cmd.copy(cfg_file, dirpath) 
self.daemon_reload() def cfg_uninstall(self, name, cfg_file): log.info("uninstalling service configuration overlay file '%s' for service '%s'" % (cfg_file, name)) dirpath = consts.SERVICE_SYSTEMD_CFG_PATH % name path = "%s/%s" % (dirpath, os.path.basename(cfg_file)) cmd.unlink(path) self.daemon_reload() # remove the service dir if empty, do not check for errors try: os.rmdir(dirpath) except (FileNotFoundError, OSError): pass def cfg_verify(self, name, cfg_file): if cfg_file is None: return None path = "%s/%s" % (consts.SERVICE_SYSTEMD_CFG_PATH % name, os.path.basename(cfg_file)) if not os.path.exists(cfg_file): log.error("Unable to find service '%s' configuration '%s'" % (name, cfg_file)) return False if not os.path.exists(path): log.error("Service '%s' configuration not installed in '%s'" % (name, path)) return False sha256sum1 = cmd.sha256sum(cfg_file) sha256sum2 = cmd.sha256sum(path) return sha256sum1 == sha256sum2 class ServicePlugin(base.Plugin): """ `service`:: Plug-in for handling sysvinit, sysv-rc, openrc and systemd services. + The syntax is as follows: + [subs="+quotes,+macros"] ---- [service] service.__service_name__=__commands__[,file:__file__] ---- + Supported service-handling `_commands_` are `start`, `stop`, `enable` and `disable`. The optional `file:__file__` directive installs an overlay configuration file `__file__`. Multiple commands must be comma (`,`) or semicolon (`;`) separated. If the directives conflict, the last one is used. + The service plugin supports configuration overlays only for systemd. In other init systems, this directive is ignored. The configuration overlay files are copied to `/etc/systemd/system/__service_name__.service.d/` directories. Upon profile unloading, the directory is removed if it is empty. + With systemd, the `start` command is implemented by `restart` in order to allow loading of the service configuration file overlay. + NOTE: With non-systemd init systems, the plug-in operates on the current runlevel only. 
+ .Start and enable the `sendmail` service with an overlay file ==== ---- [service] service.sendmail=start,enable,file:${i:PROFILE_DIR}/tuned-sendmail.conf ---- The internal variable `${i:PROFILE_DIR}` points to the directory from which the profile is loaded. ==== """ def __init__(self, *args, **kwargs): super(ServicePlugin, self).__init__(*args, **kwargs) self._has_dynamic_options = True self._init_handler = self._detect_init_system() def _check_cmd(self, command): (retcode, out) = cmd.execute(command, no_errors = [0]) return retcode == 0 def _detect_init_system(self): if self._check_cmd(["systemctl", "status"]): log.debug("detected systemd") return SystemdHandler() elif self._check_cmd(["chkconfig"]): log.debug("detected generic sysvinit") return SysVHandler() elif self._check_cmd(["update-rc.d", "-h"]): log.debug("detected sysv-rc") return SysVRCHandler() elif self._check_cmd(["rc-update", "-h"]): log.debug("detected openrc") return OpenRCHandler() else: raise exceptions.NotSupportedPluginException("Unable to detect your init system, disabling the plugin.") def _parse_service_options(self, name, val): l = re.split(r"\s*[,;]\s*", val) service = Service() for i in l: if i == "enable": service.enable = True elif i == "disable": service.enable = False elif i == "start": service.start = True elif i == "stop": service.start = False elif i[:5] == "file:": service.cfg_file = i[5:] else: log.error("service '%s': invalid service option: '%s'" % (name, i)) return service def _instance_init(self, instance): instance._has_dynamic_tuning = False instance._has_static_tuning = True self._services = collections.OrderedDict([(option[8:], self._parse_service_options(option[8:], self._variables.expand(value))) for option, value in instance.options.items() if option[:8] == "service." 
and len(option) > 8]) instance._services_original = {} def _instance_cleanup(self, instance): pass def _process_service(self, name, start, enable, runlevel): if start: self._init_handler.start(name) elif start is not None: self._init_handler.stop(name) if enable: self._init_handler.enable(name, runlevel) elif enable is not None: self._init_handler.disable(name, runlevel) def _instance_apply_static(self, instance): runlevel = self._init_handler.runlevel_get() if runlevel is None: log.error("Cannot detect runlevel") return for service in self._services.items(): is_enabled = self._init_handler.is_enabled(service[0], runlevel) is_running = self._init_handler.is_running(service[0]) instance._services_original[service[0]] = Service(is_running, is_enabled, service[1].cfg_file, runlevel) if service[1].cfg_file: self._init_handler.cfg_install(service[0], service[1].cfg_file) self._process_service(service[0], service[1].start, service[1].enable, runlevel) def _instance_verify_static(self, instance, ignore_missing, devices): runlevel = self._init_handler.runlevel_get() if runlevel is None: log.error(consts.STR_VERIFY_PROFILE_FAIL % "cannot detect runlevel") return False ret = True for service in self._services.items(): ret_cfg_verify = self._init_handler.cfg_verify(service[0], service[1].cfg_file) if ret_cfg_verify: log.info(consts.STR_VERIFY_PROFILE_OK % "service '%s' configuration '%s' matches" % (service[0], service[1].cfg_file)) elif ret_cfg_verify is not None: log.error(consts.STR_VERIFY_PROFILE_FAIL % "service '%s' configuration '%s' differs" % (service[0], service[1].cfg_file)) ret = False else: log.info(consts.STR_VERIFY_PROFILE_VALUE_MISSING % "service '%s' configuration '%s'" % (service[0], service[1].cfg_file)) is_enabled = self._init_handler.is_enabled(service[0], runlevel) is_running = self._init_handler.is_running(service[0]) if self._verify_value("%s running" % service[0], service[1].start, is_running, ignore_missing) is False: ret = False if 
self._verify_value("%s enabled" % service[0], service[1].enable, is_enabled, ignore_missing) is False: ret = False return ret def _instance_unapply_static(self, instance, full_rollback = False): for name, value in list(instance._services_original.items()): if value.cfg_file: self._init_handler.cfg_uninstall(name, value.cfg_file) self._process_service(name, value.start, value.enable, value.runlevel) 07070100000157000081A40000000000000000000000016391BC3A000017ED000000000000000000000000000000000000003B00000000tuned-2.19.0.29+git.b894a3e/tuned/plugins/plugin_sysctl.pyimport re from . import base from .decorators import * import tuned.logs from subprocess import * from tuned.utils.commands import commands import tuned.consts as consts import errno import os log = tuned.logs.get() DEPRECATED_SYSCTL_OPTIONS = [ "base_reachable_time", "retrans_time" ] SYSCTL_CONFIG_DIRS = [ "/run/sysctl.d", "/etc/sysctl.d" ] class SysctlPlugin(base.Plugin): """ `sysctl`:: Sets various kernel parameters at runtime. + This plug-in is used for applying custom `sysctl` settings and should only be used to change system settings that are not covered by other *TuneD* plug-ins. If the settings are covered by other *TuneD* plug-ins, use those plug-ins instead. + The syntax for this plug-in is `_key_=_value_`, where `_key_` is the same as the key name provided by the `sysctl` utility. + .Adjusting the kernel runtime kernel.sched_min_granularity_ns value ==== ---- [sysctl] kernel.sched_min_granularity_ns=3000000 ---- ==== """ def __init__(self, *args, **kwargs): super(SysctlPlugin, self).__init__(*args, **kwargs) self._has_dynamic_options = True self._cmd = commands() def _instance_init(self, instance): instance._has_dynamic_tuning = False instance._has_static_tuning = True # FIXME: do we want to do this here? 
# recover original values in case of crash storage_key = self._storage_key(instance.name) instance._sysctl_original = self._storage.get(storage_key, {}) if len(instance._sysctl_original) > 0: log.info("recovering old sysctl settings from previous run") self._instance_unapply_static(instance) instance._sysctl_original = {} self._storage.unset(storage_key) instance._sysctl = instance.options def _instance_cleanup(self, instance): storage_key = self._storage_key(instance.name) self._storage.unset(storage_key) def _instance_apply_static(self, instance): for option, value in list(instance._sysctl.items()): original_value = _read_sysctl(option) if original_value is None: log.error("sysctl option %s will not be set, failed to read the original value." % option) else: new_value = self._variables.expand( self._cmd.unquote(value)) new_value = self._process_assignment_modifiers( new_value, original_value) if new_value is not None: instance._sysctl_original[option] = original_value _write_sysctl(option, new_value) storage_key = self._storage_key(instance.name) self._storage.set(storage_key, instance._sysctl_original) if self._global_cfg.get_bool(consts.CFG_REAPPLY_SYSCTL, consts.CFG_DEF_REAPPLY_SYSCTL): log.info("reapplying system sysctl") _apply_system_sysctl() def _instance_verify_static(self, instance, ignore_missing, devices): ret = True # override, so always skip missing ignore_missing = True for option, value in list(instance._sysctl.items()): curr_val = _read_sysctl(option) value = self._process_assignment_modifiers(self._variables.expand(value), curr_val) if value is not None: if self._verify_value(option, self._cmd.remove_ws(value), self._cmd.remove_ws(curr_val), ignore_missing) == False: ret = False return ret def _instance_unapply_static(self, instance, full_rollback = False): for option, value in list(instance._sysctl_original.items()): _write_sysctl(option, value) def _apply_system_sysctl(): files = {} for d in SYSCTL_CONFIG_DIRS: try: flist = os.listdir(d) except 
OSError: continue for fname in flist: if not fname.endswith(".conf"): continue if fname not in files: files[fname] = d for fname in sorted(files.keys()): d = files[fname] path = "%s/%s" % (d, fname) _apply_sysctl_config_file(path) _apply_sysctl_config_file("/etc/sysctl.conf") def _apply_sysctl_config_file(path): log.debug("Applying sysctl settings from file %s" % path) try: with open(path, "r") as f: for lineno, line in enumerate(f, 1): _apply_sysctl_config_line(path, lineno, line) log.debug("Finished applying sysctl settings from file %s" % path) except (OSError, IOError) as e: if e.errno != errno.ENOENT: log.error("Error reading sysctl settings from file %s: %s" % (path, str(e))) def _apply_sysctl_config_line(path, lineno, line): line = line.strip() if len(line) == 0 or line[0] == "#" or line[0] == ";": return tmp = line.split("=", 1) if len(tmp) != 2: log.error("Syntax error in file %s, line %d" % (path, lineno)) return option, value = tmp option = option.strip() if len(option) == 0: log.error("Syntax error in file %s, line %d" % (path, lineno)) return value = value.strip() _write_sysctl(option, value, ignore_missing = True) def _get_sysctl_path(option): return "/proc/sys/%s" % option.replace(".", "/") def _read_sysctl(option): path = _get_sysctl_path(option) try: with open(path, "r") as f: line = "" for i, line in enumerate(f): if i > 0: log.error("Failed to read sysctl parameter '%s', multi-line values are unsupported" % option) return None value = line.strip() log.debug("Value of sysctl parameter '%s' is '%s'" % (option, value)) return value except (OSError, IOError) as e: if e.errno == errno.ENOENT: log.error("Failed to read sysctl parameter '%s', the parameter does not exist" % option) else: log.error("Failed to read sysctl parameter '%s': %s" % (option, str(e))) return None def _write_sysctl(option, value, ignore_missing = False): path = _get_sysctl_path(option) if os.path.basename(path) in DEPRECATED_SYSCTL_OPTIONS: log.error("Refusing to set deprecated 
sysctl option %s" % option) return False try: log.debug("Setting sysctl parameter '%s' to '%s'" % (option, value)) with open(path, "w") as f: f.write(value) return True except (OSError, IOError) as e: if e.errno == errno.ENOENT: log_func = log.debug if ignore_missing else log.error log_func("Failed to set sysctl parameter '%s' to '%s', the parameter does not exist" % (option, value)) else: log.error("Failed to set sysctl parameter '%s' to '%s': %s" % (option, value, str(e))) return False 07070100000158000081A40000000000000000000000016391BC3A00000A5E000000000000000000000000000000000000003A00000000tuned-2.19.0.29+git.b894a3e/tuned/plugins/plugin_sysfs.pyfrom . import base import glob import re import os.path from .decorators import * import tuned.logs from subprocess import * from tuned.utils.commands import commands log = tuned.logs.get() class SysfsPlugin(base.Plugin): """ `sysfs`:: Sets various `sysfs` settings specified by the plug-in options. + The syntax is `_name_=_value_`, where `_name_` is the `sysfs` path to use and `_value_` is the value to write. The `sysfs` path supports the shell-style wildcard characters (see `man 7 glob` for additional detail). + Use this plugin in case you need to change some settings that are not covered by other plug-ins. Prefer specific plug-ins if they cover the required settings. 
+ .Ignore corrected errors and associated scans that cause latency spikes ==== ---- [sysfs] /sys/devices/system/machinecheck/machinecheck*/ignore_ce=1 ---- ==== """ # TODO: resolve possible conflicts with sysctl settings from other plugins def __init__(self, *args, **kwargs): super(SysfsPlugin, self).__init__(*args, **kwargs) self._has_dynamic_options = True self._cmd = commands() def _instance_init(self, instance): instance._has_dynamic_tuning = False instance._has_static_tuning = True instance._sysfs = dict([(os.path.normpath(key_value[0]), key_value[1]) for key_value in list(instance.options.items())]) instance._sysfs_original = {} def _instance_cleanup(self, instance): pass def _instance_apply_static(self, instance): for key, value in list(instance._sysfs.items()): v = self._variables.expand(value) for f in glob.iglob(key): if self._check_sysfs(f): instance._sysfs_original[f] = self._read_sysfs(f) self._write_sysfs(f, v) else: log.error("rejecting write to '%s' (not inside /sys)" % f) def _instance_verify_static(self, instance, ignore_missing, devices): ret = True for key, value in list(instance._sysfs.items()): v = self._variables.expand(value) for f in glob.iglob(key): if self._check_sysfs(f): curr_val = self._read_sysfs(f) if self._verify_value(f, v, curr_val, ignore_missing) == False: ret = False return ret def _instance_unapply_static(self, instance, full_rollback = False): for key, value in list(instance._sysfs_original.items()): self._write_sysfs(key, value) def _check_sysfs(self, sysfs_file): return re.match(r"^/sys/.*", sysfs_file) def _read_sysfs(self, sysfs_file): data = self._cmd.read_file(sysfs_file).strip() if len(data) > 0: return self._cmd.get_active_option(data, False) else: return None def _write_sysfs(self, sysfs_file, value): return self._cmd.write_to_file(sysfs_file, value) 
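The `sysfs` plug-in above expands each configured path with shell-style globbing and refuses any write that would land outside `/sys` (`_check_sysfs`). A minimal standalone sketch of that guard and expansion logic, simplified from the plugin (the `write` callback is a hypothetical stand-in for tuned's `commands.write_to_file`, not its real API):

```python
import glob
import os.path
import re

def is_safe_sysfs_path(path):
    # Same guard idea as _check_sysfs: only paths under /sys may be written.
    # Normalizing first defeats tricks like "/sys/../etc/passwd".
    return re.match(r"^/sys/.*", os.path.normpath(path)) is not None

def apply_sysfs_settings(settings, write):
    # Expand each pattern, write the value to every safe match, and
    # remember which paths were touched so a rollback could restore them.
    touched = []
    for pattern, value in settings.items():
        for path in glob.iglob(os.path.normpath(pattern)):
            if is_safe_sysfs_path(path):
                write(path, value)
                touched.append(path)
    return touched
```

With this guard, a profile entry such as `/sys/devices/system/machinecheck/machinecheck*/ignore_ce=1` only ever writes inside `/sys`, matching the plugin's `rejecting write to '%s' (not inside /sys)` behavior.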
07070100000159000081A40000000000000000000000016391BC3A00001517000000000000000000000000000000000000003C00000000tuned-2.19.0.29+git.b894a3e/tuned/plugins/plugin_systemd.pyfrom . import base from .decorators import * import tuned.logs from . import exceptions from tuned.utils.commands import commands import tuned.consts as consts import os import re log = tuned.logs.get() class SystemdPlugin(base.Plugin): """ `systemd`:: Plug-in for tuning systemd options. + The [option]`cpu_affinity` option allows setting CPUAffinity in `/etc/systemd/system.conf`. This configures the CPU affinity for the service manager as well as the default CPU affinity for all forked off processes. The option takes a comma-separated list of CPUs with optional CPU ranges specified by the minus sign (`-`). + .Set the CPUAffinity for `systemd` to `0 1 2 3` ==== ---- [systemd] cpu_affinity=0-3 ---- ==== + NOTE: These tunings are unloaded only on profile change followed by a reboot. """ def __init__(self, *args, **kwargs): if not os.path.isfile(consts.SYSTEMD_SYSTEM_CONF_FILE): raise exceptions.NotSupportedPluginException("Required systemd '%s' configuration file not found, disabling plugin." 
% consts.SYSTEMD_SYSTEM_CONF_FILE) super(SystemdPlugin, self).__init__(*args, **kwargs) self._cmd = commands() def _instance_init(self, instance): instance._has_dynamic_tuning = False instance._has_static_tuning = True def _instance_cleanup(self, instance): pass @classmethod def _get_config_options(cls): return { "cpu_affinity": None, } def _get_keyval(self, conf, key): if conf is not None: mo = re.search(r"^\s*" + key + r"\s*=\s*(.*)$", conf, flags = re.MULTILINE) if mo is not None and mo.lastindex == 1: return mo.group(1) return None # add/replace key with the value def _add_keyval(self, conf, key, val): (conf_new, nsubs) = re.subn(r"^(\s*" + key + r"\s*=).*$", r"\g<1>" + str(val), conf, flags = re.MULTILINE) if nsubs < 1: try: if conf[-1] != "\n": conf += "\n" except IndexError: pass conf += key + "=" + str(val) + "\n" return conf return conf_new def _del_key(self, conf, key): return re.sub(r"^\s*" + key + r"\s*=.*\n", "", conf, flags = re.MULTILINE) def _read_systemd_system_conf(self): systemd_system_conf = self._cmd.read_file(consts.SYSTEMD_SYSTEM_CONF_FILE, err_ret = None) if systemd_system_conf is None: log.error("error reading systemd configuration file") return None return systemd_system_conf def _write_systemd_system_conf(self, conf): tmpfile = consts.SYSTEMD_SYSTEM_CONF_FILE + consts.TMP_FILE_SUFFIX if not self._cmd.write_to_file(tmpfile, conf): log.error("error writing systemd configuration file") self._cmd.unlink(tmpfile, no_error = True) return False # Atomic replace, this doesn't work on Windows (AFAIK there is no way on Windows how to do this # atomically), but it's unlikely this code will run there if not self._cmd.rename(tmpfile, consts.SYSTEMD_SYSTEM_CONF_FILE): log.error("error replacing systemd configuration file '%s'" % consts.SYSTEMD_SYSTEM_CONF_FILE) self._cmd.unlink(tmpfile, no_error = True) return False return True def _get_storage_filename(self): return os.path.join(consts.PERSISTENT_STORAGE_DIR, self.name) def 
_remove_systemd_tuning(self): conf = self._read_systemd_system_conf() if (conf is not None): fname = self._get_storage_filename() cpu_affinity_saved = self._cmd.read_file(fname, err_ret = None, no_error = True) self._cmd.unlink(fname) if cpu_affinity_saved is None: conf = self._del_key(conf, consts.SYSTEMD_CPUAFFINITY_VAR) else: conf = self._add_keyval(conf, consts.SYSTEMD_CPUAFFINITY_VAR, cpu_affinity_saved) self._write_systemd_system_conf(conf) def _instance_unapply_static(self, instance, full_rollback = False): if full_rollback: log.info("removing '%s' systemd tuning previously added by TuneD" % consts.SYSTEMD_CPUAFFINITY_VAR) self._remove_systemd_tuning() log.console("you may need to manually run 'dracut -f' to update the systemd configuration in the initrd image") # convert cpulist from systemd syntax to TuneD syntax and unpack it def _cpulist_convert_unpack(self, cpulist): if cpulist is None: return "" return " ".join(str(v) for v in self._cmd.cpulist_unpack(re.sub(r"\s+", r",", re.sub(r",\s+", r",", cpulist)))) @command_custom("cpu_affinity", per_device = False) def _cmdline(self, enabling, value, verify, ignore_missing): conf_affinity = None conf_affinity_unpacked = None v = self._cmd.unescape(self._variables.expand(self._cmd.unquote(value))) v_unpacked = " ".join(str(v) for v in self._cmd.cpulist_unpack(v)) conf = self._read_systemd_system_conf() if conf is not None: conf_affinity = self._get_keyval(conf, consts.SYSTEMD_CPUAFFINITY_VAR) conf_affinity_unpacked = self._cpulist_convert_unpack(conf_affinity) if verify: return self._verify_value("cpu_affinity", v_unpacked, conf_affinity_unpacked, ignore_missing) if enabling: fname = self._get_storage_filename() cpu_affinity_saved = self._cmd.read_file(fname, err_ret = None, no_error = True) if conf_affinity is not None and cpu_affinity_saved is None and v_unpacked != conf_affinity_unpacked: self._cmd.write_to_file(fname, conf_affinity, makedir = True) log.info("setting '%s' to '%s' in the '%s'" %
(consts.SYSTEMD_CPUAFFINITY_VAR, v_unpacked, consts.SYSTEMD_SYSTEM_CONF_FILE)) self._write_systemd_system_conf(self._add_keyval(conf, consts.SYSTEMD_CPUAFFINITY_VAR, v_unpacked)) 0707010000015A000081A40000000000000000000000016391BC3A00000794000000000000000000000000000000000000003800000000tuned-2.19.0.29+git.b894a3e/tuned/plugins/plugin_usb.pyfrom . import base from .decorators import * import tuned.logs from tuned.utils.commands import commands import glob log = tuned.logs.get() class USBPlugin(base.Plugin): """ `usb`:: Sets autosuspend timeout of USB devices to the value specified by the [option]`autosuspend` option in seconds. If the [option]`devices` option is specified, the [option]`autosuspend` option applies to only the USB devices specified, otherwise it applies to all USB devices. + The value `0` means that autosuspend is disabled. + .To turn off USB autosuspend for USB devices `1-1` and `1-2` ==== ---- [usb] devices=1-1,1-2 autosuspend=0 ---- ==== """ def _init_devices(self): self._devices_supported = True self._free_devices = set() self._assigned_devices = set() for device in self._hardware_inventory.get_devices("usb").match_property("DEVTYPE", "usb_device"): self._free_devices.add(device.sys_name) self._cmd = commands() def _get_device_objects(self, devices): return [self._hardware_inventory.get_device("usb", x) for x in devices] @classmethod def _get_config_options(self): return { "autosuspend" : None, } def _instance_init(self, instance): instance._has_static_tuning = True instance._has_dynamic_tuning = False def _instance_cleanup(self, instance): pass def _autosuspend_sysfile(self, device): return "/sys/bus/usb/devices/%s/power/autosuspend" % device @command_set("autosuspend", per_device=True) def _set_autosuspend(self, value, device, sim): enable = self._option_bool(value) if enable is None: return None val = "1" if enable else "0" if not sim: sys_file = self._autosuspend_sysfile(device) self._cmd.write_to_file(sys_file, val) return val 
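The `_cpulist_convert_unpack` helper in the systemd plugin above bridges systemd's cpulist syntax (comma- or space-separated, with ranges such as `0-3`) and TuneD's space-separated unpacked form. A standalone sketch of that conversion, under the assumption that `tuned.utils.commands.cpulist_unpack` expands ranges this way (the real helper lives in tuned's commands module and handles more cases):

```python
import re

def cpulist_unpack(cpulist):
    # Assumed behavior of tuned's cpulist_unpack: "0-2,5" -> [0, 1, 2, 5]
    cpus = []
    for part in cpulist.split(","):
        part = part.strip()
        if not part:
            continue
        if "-" in part:
            lo, hi = part.split("-", 1)
            cpus.extend(range(int(lo), int(hi) + 1))
        else:
            cpus.append(int(part))
    return cpus

def cpulist_convert_unpack(cpulist):
    # systemd allows whitespace separators; normalize them to commas first,
    # mirroring the regex substitutions in the plugin, then unpack.
    if cpulist is None:
        return ""
    normalized = re.sub(r"\s+", ",", re.sub(r",\s+", ",", cpulist.strip()))
    return " ".join(str(v) for v in cpulist_unpack(normalized))
```

This is why a profile value of `cpu_affinity=0-3` and a `CPUAffinity=0 1 2 3` line in `system.conf` compare equal during verification.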
@command_get("autosuspend") def _get_autosuspend(self, device, ignore_missing=False): sys_file = self._autosuspend_sysfile(device) return self._cmd.read_file(sys_file, no_error=ignore_missing).strip() 0707010000015B000081A40000000000000000000000016391BC3A00000DAB000000000000000000000000000000000000003A00000000tuned-2.19.0.29+git.b894a3e/tuned/plugins/plugin_video.pyfrom . import base from .decorators import * import tuned.logs from tuned.utils.commands import commands import os import re log = tuned.logs.get() class VideoPlugin(base.Plugin): """ `video`:: Sets various powersave levels on video cards. Currently, only Radeon cards are supported. The powersave level can be specified by using the [option]`radeon_powersave` option. Supported values are: + -- * `default` * `auto` * `low` * `mid` * `high` * `dynpm` * `dpm-battery` * `dpm-balanced` * `dpm-performance` -- + For additional detail, see link:https://www.x.org/wiki/RadeonFeature/#kmspowermanagementoptions[KMS Power Management Options]. + NOTE: This plug-in is experimental and the option might change in future releases.
	+
	.To set the powersave level for the Radeon video card to high
	====
	----
	[video]
	radeon_powersave=high
	----
	====
	"""

	def _init_devices(self):
		self._devices_supported = True
		self._free_devices = set()
		self._assigned_devices = set()
		# FIXME: this is a blind shot, needs testing
		for device in self._hardware_inventory.get_devices("drm").match_sys_name("card*").match_property("DEVTYPE", "drm_minor"):
			self._free_devices.add(device.sys_name)

		self._cmd = commands()

	def _get_device_objects(self, devices):
		return [self._hardware_inventory.get_device("drm", x) for x in devices]

	@classmethod
	def _get_config_options(self):
		return {
			"radeon_powersave" : None,
		}

	def _instance_init(self, instance):
		instance._has_dynamic_tuning = False
		instance._has_static_tuning = True

	def _instance_cleanup(self, instance):
		pass

	def _radeon_powersave_files(self, device):
		return {
			"method" : "/sys/class/drm/%s/device/power_method" % device,
			"profile": "/sys/class/drm/%s/device/power_profile" % device,
			"dpm_state": "/sys/class/drm/%s/device/power_dpm_state" % device
		}

	@command_set("radeon_powersave", per_device=True)
	def _set_radeon_powersave(self, value, device, sim):
		sys_files = self._radeon_powersave_files(device)
		va = str(re.sub(r"(\s*:\s*)|(\s+)|(\s*;\s*)|(\s*,\s*)", " ", value)).split()
		if not os.path.exists(sys_files["method"]):
			if not sim:
				log.warn("radeon_powersave is not supported on '%s'" % device)
			return None
		for v in va:
			if v in ["default", "auto", "low", "mid", "high"]:
				if not sim:
					if (self._cmd.write_to_file(sys_files["method"], "profile") and
							self._cmd.write_to_file(sys_files["profile"], v)):
						return v
			elif v == "dynpm":
				if not sim:
					if (self._cmd.write_to_file(sys_files["method"], "dynpm")):
						return "dynpm"
			# new DPM profiles, recommended to use if supported
			elif v in ["dpm-battery", "dpm-balanced", "dpm-performance"]:
				if not sim:
					state = v[len("dpm-"):]
					if (self._cmd.write_to_file(sys_files["method"], "dpm") and
							self._cmd.write_to_file(sys_files["dpm_state"], state)):
						return v
			else:
				if not sim:
					log.warn("Invalid option for radeon_powersave.")
				return None
		return None

	@command_get("radeon_powersave")
	def _get_radeon_powersave(self, device, ignore_missing = False):
		sys_files = self._radeon_powersave_files(device)
		method = self._cmd.read_file(sys_files["method"], no_error=ignore_missing).strip()
		if method == "profile":
			return self._cmd.read_file(sys_files["profile"]).strip()
		elif method == "dynpm":
			return method
		elif method == "dpm":
			return "dpm-" + self._cmd.read_file(sys_files["dpm_state"]).strip()
		else:
			return None
0707010000015C000081A40000000000000000000000016391BC3A00000D4E000000000000000000000000000000000000003700000000tuned-2.19.0.29+git.b894a3e/tuned/plugins/plugin_vm.py
from . import base
from .decorators import *
import tuned.logs
import os
import struct
import glob
from tuned.utils.commands import commands

log = tuned.logs.get()
cmd = commands()

class VMPlugin(base.Plugin):
	"""
	`vm`::

	Enables or disables transparent huge pages depending on the value of the
	[option]`transparent_hugepages` option. The option can have one of three
	possible values: `always`, `madvise` and `never`.
	+
	.Disable transparent hugepages
	====
	----
	[vm]
	transparent_hugepages=never
	----
	====
	+
	The [option]`transparent_hugepage.defrag` option specifies the
	defragmentation policy. Possible values for this option are `always`,
	`defer`, `defer+madvise`, `madvise` and `never`. For a detailed
	explanation of these values refer to
	link:https://www.kernel.org/doc/Documentation/vm/transhuge.txt[Transparent Hugepage Support].
	"""

	@classmethod
	def _get_config_options(self):
		return {
			"transparent_hugepages" : None,
			"transparent_hugepage" : None,
			"transparent_hugepage.defrag" : None,
		}

	def _instance_init(self, instance):
		instance._has_static_tuning = True
		instance._has_dynamic_tuning = False

	def _instance_cleanup(self, instance):
		pass

	@classmethod
	def _thp_path(self):
		path = "/sys/kernel/mm/transparent_hugepage"
		if not os.path.exists(path):
			# RHEL-6 support
			path = "/sys/kernel/mm/redhat_transparent_hugepage"
		return path

	@command_set("transparent_hugepages")
	def _set_transparent_hugepages(self, value, sim):
		if value not in ["always", "never", "madvise"]:
			if not sim:
				log.warn("Incorrect 'transparent_hugepages' value '%s'." % str(value))
			return None

		cmdline = cmd.read_file("/proc/cmdline", no_error = True)
		if cmdline.find("transparent_hugepage=") > 0:
			if not sim:
				log.info("transparent_hugepage is already set in kernel boot cmdline, ignoring value from profile")
			return None

		sys_file = os.path.join(self._thp_path(), "enabled")
		if os.path.exists(sys_file):
			if not sim:
				cmd.write_to_file(sys_file, value)
			return value
		else:
			if not sim:
				log.warn("Option 'transparent_hugepages' is not supported on current hardware.")
			return None

	# just an alias to transparent_hugepages
	@command_set("transparent_hugepage")
	def _set_transparent_hugepage(self, value, sim):
		return self._set_transparent_hugepages(value, sim)

	@command_get("transparent_hugepages")
	def _get_transparent_hugepages(self):
		sys_file = os.path.join(self._thp_path(), "enabled")
		if os.path.exists(sys_file):
			return cmd.get_active_option(cmd.read_file(sys_file))
		else:
			return None

	# just an alias to transparent_hugepages
	@command_get("transparent_hugepage")
	def _get_transparent_hugepage(self):
		return self._get_transparent_hugepages()

	@command_set("transparent_hugepage.defrag")
	def _set_transparent_hugepage_defrag(self, value, sim):
		sys_file = os.path.join(self._thp_path(), "defrag")
		if os.path.exists(sys_file):
			if not sim:
				cmd.write_to_file(sys_file, value)
			return value
		else:
			if not sim:
				log.warn("Option 'transparent_hugepage.defrag' is not supported on current hardware.")
			return None

	@command_get("transparent_hugepage.defrag")
	def _get_transparent_hugepage_defrag(self):
		sys_file = os.path.join(self._thp_path(), "defrag")
		if os.path.exists(sys_file):
			return cmd.get_active_option(cmd.read_file(sys_file))
		else:
			return None
0707010000015D000081A40000000000000000000000016391BC3A000005FA000000000000000000000000000000000000003800000000tuned-2.19.0.29+git.b894a3e/tuned/plugins/repository.py
from tuned.utils.plugin_loader import PluginLoader
import tuned.plugins.base
import tuned.logs

log = tuned.logs.get()

__all__ = ["Repository"]

class Repository(PluginLoader):

	def __init__(self, monitor_repository, storage_factory, hardware_inventory, device_matcher, device_matcher_udev, plugin_instance_factory, global_cfg, variables):
		super(Repository, self).__init__()
		self._plugins = set()
		self._monitor_repository = monitor_repository
		self._storage_factory = storage_factory
		self._hardware_inventory = hardware_inventory
		self._device_matcher = device_matcher
		self._device_matcher_udev = device_matcher_udev
		self._plugin_instance_factory = plugin_instance_factory
		self._global_cfg = global_cfg
		self._variables = variables

	@property
	def plugins(self):
		return self._plugins

	def _set_loader_parameters(self):
		self._namespace = "tuned.plugins"
		self._prefix = "plugin_"
		self._interface = tuned.plugins.base.Plugin

	def create(self, plugin_name):
		log.debug("creating plugin %s" % plugin_name)
		plugin_cls = self.load_plugin(plugin_name)
		plugin_instance = plugin_cls(self._monitor_repository, self._storage_factory, self._hardware_inventory, self._device_matcher,
				self._device_matcher_udev, self._plugin_instance_factory, self._global_cfg, self._variables)
		self._plugins.add(plugin_instance)
		return plugin_instance

	def delete(self, plugin):
		assert isinstance(plugin, self._interface)
		log.debug("removing plugin %s" % plugin)
		plugin.cleanup()
		self._plugins.remove(plugin)
0707010000015E000041ED0000000000000000000000036391BC3A00000000000000000000000000000000000000000000002B00000000tuned-2.19.0.29+git.b894a3e/tuned/profiles
0707010000015F000081A40000000000000000000000016391BC3A00000119000000000000000000000000000000000000003700000000tuned-2.19.0.29+git.b894a3e/tuned/profiles/__init__.py
from tuned.profiles.locator import *
from tuned.profiles.loader import *
from tuned.profiles.profile import *
from tuned.profiles.unit import *
from tuned.profiles.exceptions import *
from tuned.profiles.factory import *
from tuned.profiles.merger import *
from . import functions
07070100000160000081A40000000000000000000000016391BC3A0000005F000000000000000000000000000000000000003900000000tuned-2.19.0.29+git.b894a3e/tuned/profiles/exceptions.py
import tuned.exceptions

class InvalidProfileException(tuned.exceptions.TunedException):
	pass
07070100000161000081A40000000000000000000000016391BC3A0000008D000000000000000000000000000000000000003600000000tuned-2.19.0.29+git.b894a3e/tuned/profiles/factory.py
import tuned.profiles.profile

class Factory(object):
	def create(self, name, config):
		return tuned.profiles.profile.Profile(name, config)
07070100000162000041ED0000000000000000000000026391BC3A00000000000000000000000000000000000000000000003500000000tuned-2.19.0.29+git.b894a3e/tuned/profiles/functions
07070100000163000081A40000000000000000000000016391BC3A00000023000000000000000000000000000000000000004100000000tuned-2.19.0.29+git.b894a3e/tuned/profiles/functions/__init__.py
from .repository import Repository
07070100000164000081A40000000000000000000000016391BC3A00000410000000000000000000000000000000000000003D00000000tuned-2.19.0.29+git.b894a3e/tuned/profiles/functions/base.py
import os
import tuned.logs
from tuned.utils.commands import commands

log = tuned.logs.get()

class Function(object):
	"""
	Built-in function
	"""

	def __init__(self, name, nargs_max, nargs_min = None):
		self._name = name
		self._nargs_max = nargs_max
		self._nargs_min = nargs_min
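The `nargs_max`/`nargs_min` convention used by the built-in functions (`nargs_max == 0` meaning unlimited) can be exercised in isolation. A sketch of the same check as a free function, mirroring `Function._check_args` (the standalone name is hypothetical):

```python
def check_args(args, nargs_max, nargs_min=None):
    # nargs_max == 0 means "unlimited"; otherwise it caps len(args).
    # nargs_min, when given, is a lower bound on len(args).
    if args is None or nargs_max is None:
        return False
    n = len(args)
    return (nargs_max == 0 or nargs_max >= n) and (nargs_min is None or nargs_min <= n)
```

So a function declared with `(0, 1)` accepts any non-empty argument list, while `(3, 3)` (as `assertion` uses) accepts exactly three arguments.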
self._cmd = commands() # checks arguments # nargs_max - maximal number of arguments, there mustn't be more arguments, # if nargs_max is 0, number of arguments is unlimited # nargs_min - minimal number of arguments, if not None there must # be the same number of arguments or more @classmethod def _check_args(cls, args, nargs_max, nargs_min = None): if args is None or nargs_max is None: return False la = len(args) return (nargs_max == 0 or nargs_max >= la) and (nargs_min is None or nargs_min <= la) def execute(self, args): if self._check_args(args, self._nargs_max, self._nargs_min): return True else: log.error("invalid number of arguments for builtin function '%s'" % self._name) return False 07070100000165000081A40000000000000000000000016391BC3A000002FF000000000000000000000000000000000000004B00000000tuned-2.19.0.29+git.b894a3e/tuned/profiles/functions/function_assertion.pyimport os import tuned.logs from . import base from tuned.utils.commands import commands from tuned.profiles.exceptions import InvalidProfileException log = tuned.logs.get() class assertion(base.Function): """ Assertion: compares argument 2 with argument 3. If they don't match it logs text from argument 1 and throws InvalidProfileException. This exception will abort profile loading. """ def __init__(self): # 3 arguments super(assertion, self).__init__("assertion", 3, 3) def execute(self, args): if not super(assertion, self).execute(args): return None if args[1] != args[2]: log.error("assertion '%s' failed: '%s' != '%s'" % (args[0], args[1], args[2])) raise InvalidProfileException("Assertion '%s' failed." % args[0]) return None 07070100000166000081A40000000000000000000000016391BC3A0000032B000000000000000000000000000000000000005500000000tuned-2.19.0.29+git.b894a3e/tuned/profiles/functions/function_assertion_non_equal.pyimport os import tuned.logs from . 
import base from tuned.utils.commands import commands from tuned.profiles.exceptions import InvalidProfileException log = tuned.logs.get() class assertion_non_equal(base.Function): """ Assertion non equal: compares argument 2 with argument 3. If they match it logs text from argument 1 and throws InvalidProfileException. This exception will abort profile loading. """ def __init__(self): # 3 arguments super(assertion_non_equal, self).__init__("assertion_non_equal", 3, 3) def execute(self, args): if not super(assertion_non_equal, self).execute(args): return None if args[1] == args[2]: log.error("assertion '%s' failed: '%s' == '%s'" % (args[0], args[1], args[2])) raise InvalidProfileException("Assertion '%s' failed." % args[0]) return None 07070100000167000081A40000000000000000000000016391BC3A00000544000000000000000000000000000000000000005500000000tuned-2.19.0.29+git.b894a3e/tuned/profiles/functions/function_calc_isolated_cores.pyimport os import glob import tuned.logs from . import base import tuned.consts as consts log = tuned.logs.get() class calc_isolated_cores(base.Function): """ Calculates and returns isolated cores. The argument specifies how many cores per socket reserve for housekeeping. If not specified, 1 core per socket is reserved for housekeeping and the rest is isolated. 
""" def __init__(self): # max 1 argument super(calc_isolated_cores, self).__init__("calc_isolated_cores", 1) def execute(self, args): if not super(calc_isolated_cores, self).execute(args): return None cpus_reserve = 1 if len(args) > 0: if not args[0].isdecimal() or int(args[0]) < 0: log.error("invalid argument '%s' for builtin function '%s', it must be non-negative integer" % (args[0], self._name)) return None else: cpus_reserve = int(args[0]) topo = {} for cpu in glob.iglob(os.path.join(consts.SYSFS_CPUS_PATH, "cpu*")): cpuid = os.path.basename(cpu)[3:] if cpuid.isdecimal(): socket = self._cmd.read_file(os.path.join(cpu, "topology/physical_package_id")).strip() if socket.isdecimal(): topo[socket] = topo.get(socket, []) + [cpuid] isol_cpus = [] for cpus in topo.values(): cpus.sort(key = int) isol_cpus = isol_cpus + cpus[cpus_reserve:] isol_cpus.sort(key = int) return ",".join(isol_cpus) 07070100000168000081A40000000000000000000000016391BC3A0000028A000000000000000000000000000000000000005700000000tuned-2.19.0.29+git.b894a3e/tuned/profiles/functions/function_check_net_queue_count.pyimport tuned.logs from . import base log = tuned.logs.get() class check_net_queue_count(base.Function): """ Checks whether the user has specified a queue count for net devices. If not, return the number of housekeeping CPUs. """ def __init__(self): # 1 argument super(check_net_queue_count, self).__init__("check_net_queue_count", 1, 1) def execute(self, args): if not super(check_net_queue_count, self).execute(args): return None if args[0].isdigit(): return args[0] (ret, out) = self._cmd.execute(["nproc"]) log.warn("net-dev queue count is not correctly specified, setting it to HK CPUs %s" % (out)) return out 07070100000169000081A40000000000000000000000016391BC3A000003F3000000000000000000000000000000000000004F00000000tuned-2.19.0.29+git.b894a3e/tuned/profiles/functions/function_cpuinfo_check.pyimport re import tuned.logs from . 
import base log = tuned.logs.get() class cpuinfo_check(base.Function): """ Checks regexes against /proc/cpuinfo. Accepts arguments in the following form: REGEX1, STR1, REGEX2, STR2, ...[, STR_FALLBACK] If REGEX1 matches something in /proc/cpuinfo it expands to STR1, if REGEX2 matches it expands to STR2. It stops on the first match, i.e. if REGEX1 matches, no more regexes are processed. If none regex matches it expands to STR_FALLBACK. If there is no fallback, it expands to empty string. """ def __init__(self): # unlimited number of arguments, min 2 arguments super(cpuinfo_check, self).__init__("cpuinfo_check", 0, 2) def execute(self, args): if not super(cpuinfo_check, self).execute(args): return None cpuinfo = self._cmd.read_file("/proc/cpuinfo") for i in range(0, len(args), 2): if i + 1 < len(args): if re.search(args[i], cpuinfo, re.MULTILINE): return args[i + 1] if len(args) % 2: return args[-1] else: return "" 0707010000016A000081A40000000000000000000000016391BC3A000001D6000000000000000000000000000000000000004D00000000tuned-2.19.0.29+git.b894a3e/tuned/profiles/functions/function_cpulist2hex.pyimport os import tuned.logs from . import base from tuned.utils.commands import commands log = tuned.logs.get() class cpulist2hex(base.Function): """ Conversion function: converts CPU list to hexadecimal CPU mask """ def __init__(self): # arbitrary number of arguments super(cpulist2hex, self).__init__("cpulist2hex", 0) def execute(self, args): if not super(cpulist2hex, self).execute(args): return None return self._cmd.cpulist2hex(",,".join(args)) 0707010000016B000081A40000000000000000000000016391BC3A00000270000000000000000000000000000000000000005400000000tuned-2.19.0.29+git.b894a3e/tuned/profiles/functions/function_cpulist2hex_invert.pyimport os import tuned.logs from . 
import base from tuned.utils.commands import commands log = tuned.logs.get() class cpulist2hex_invert(base.Function): """ Converts CPU list to hexadecimal CPU mask and inverts it """ def __init__(self): # arbitrary number of arguments super(cpulist2hex_invert, self).__init__("cpulist2hex_invert", 0) def execute(self, args): if not super(cpulist2hex_invert, self).execute(args): return None # current implementation inverts the CPU list and then converts it to hexmask return self._cmd.cpulist2hex(",".join(str(v) for v in self._cmd.cpulist_invert(",,".join(args)))) 0707010000016C000081A40000000000000000000000016391BC3A00000293000000000000000000000000000000000000005000000000tuned-2.19.0.29+git.b894a3e/tuned/profiles/functions/function_cpulist_invert.pyimport os import tuned.logs from . import base from tuned.utils.commands import commands log = tuned.logs.get() class cpulist_invert(base.Function): """ Inverts list of CPUs (makes its complement). For the complement it gets number of online CPUs from the /sys/devices/system/cpu/online, e.g. system with 4 CPUs (0-3), the inversion of list "0,2,3" will be "1" """ def __init__(self): # arbitrary number of arguments super(cpulist_invert, self).__init__("cpulist_invert", 0) def execute(self, args): if not super(cpulist_invert, self).execute(args): return None return ",".join(str(v) for v in self._cmd.cpulist_invert(",,".join(args))) 0707010000016D000081A40000000000000000000000016391BC3A0000028B000000000000000000000000000000000000005000000000tuned-2.19.0.29+git.b894a3e/tuned/profiles/functions/function_cpulist_online.pyimport os import tuned.logs from . 
import base from tuned.utils.commands import commands log = tuned.logs.get() class cpulist_online(base.Function): """ Checks whether CPUs from list are online, returns list containing only online CPUs """ def __init__(self): # arbitrary number of arguments super(cpulist_online, self).__init__("cpulist_online", 0) def execute(self, args): if not super(cpulist_online, self).execute(args): return None cpus = self._cmd.cpulist_unpack(",".join(args)) online = self._cmd.cpulist_unpack(self._cmd.read_file("/sys/devices/system/cpu/online")) return ",".join(str(v) for v in cpus if v in online) 0707010000016E000081A40000000000000000000000016391BC3A0000027D000000000000000000000000000000000000004E00000000tuned-2.19.0.29+git.b894a3e/tuned/profiles/functions/function_cpulist_pack.pyimport os import tuned.logs from . import base from tuned.utils.commands import commands log = tuned.logs.get() class cpulist_pack(base.Function): """ Conversion function: packs CPU list in form 1,2,3,5 to 1-3,5. The cpulist_unpack is used as a preprocessor, so it always returns optimal results. For details about input syntax see cpulist_unpack. """ def __init__(self): # arbitrary number of arguments super(cpulist_pack, self).__init__("cpulist_pack", 0) def execute(self, args): if not super(cpulist_pack, self).execute(args): return None return ",".join(str(v) for v in self._cmd.cpulist_pack(",,".join(args))) 0707010000016F000081A40000000000000000000000016391BC3A000002B3000000000000000000000000000000000000005100000000tuned-2.19.0.29+git.b894a3e/tuned/profiles/functions/function_cpulist_present.pyimport os import tuned.logs from . 
import base from tuned.utils.commands import commands log = tuned.logs.get() class cpulist_present(base.Function): """ Checks whether CPUs from list are present, returns list containing only present CPUs """ def __init__(self): # arbitrary number of arguments super(cpulist_present, self).__init__("cpulist_present", 0) def execute(self, args): if not super(cpulist_present, self).execute(args): return None cpus = self._cmd.cpulist_unpack(",,".join(args)) present = self._cmd.cpulist_unpack(self._cmd.read_file("/sys/devices/system/cpu/present")) return ",".join(str(v) for v in sorted(list(set(cpus).intersection(set(present))))) 07070100000170000081A40000000000000000000000016391BC3A000001FF000000000000000000000000000000000000005000000000tuned-2.19.0.29+git.b894a3e/tuned/profiles/functions/function_cpulist_unpack.pyimport os import tuned.logs from . import base from tuned.utils.commands import commands log = tuned.logs.get() class cpulist_unpack(base.Function): """ Conversion function: unpacks CPU list in form 1-3,4 to 1,2,3,4 """ def __init__(self): # arbitrary number of arguments super(cpulist_unpack, self).__init__("cpulist_unpack", 0) def execute(self, args): if not super(cpulist_unpack, self).execute(args): return None return ",".join(str(v) for v in self._cmd.cpulist_unpack(",,".join(args))) 07070100000171000081A40000000000000000000000016391BC3A000001E7000000000000000000000000000000000000004600000000tuned-2.19.0.29+git.b894a3e/tuned/profiles/functions/function_exec.pyimport os import tuned.logs from . import base from tuned.utils.commands import commands class execute(base.Function): """ Executes process and substitutes its output. 
""" def __init__(self): # unlimited number of arguments, min 1 argument (the name of executable) super(execute, self).__init__("exec", 0, 1) def execute(self, args): if not super(execute, self).execute(args): return None (ret, out) = self._cmd.execute(args) if ret == 0: return out return None 07070100000172000081A40000000000000000000000016391BC3A000001D8000000000000000000000000000000000000004D00000000tuned-2.19.0.29+git.b894a3e/tuned/profiles/functions/function_hex2cpulist.pyimport os import tuned.logs from . import base from tuned.utils.commands import commands log = tuned.logs.get() class hex2cpulist(base.Function): """ Conversion function: converts hexadecimal CPU mask to CPU list """ def __init__(self): # 1 argument super(hex2cpulist, self).__init__("hex2cpulist", 1, 1) def execute(self, args): if not super(hex2cpulist, self).execute(args): return None return ",".join(str(v) for v in self._cmd.hex2cpulist(args[0])) 07070100000173000081A40000000000000000000000016391BC3A00000195000000000000000000000000000000000000004600000000tuned-2.19.0.29+git.b894a3e/tuned/profiles/functions/function_kb2s.pyimport os import tuned.logs from . import base from tuned.utils.commands import commands class kb2s(base.Function): """ Conversion function: kbytes to sectors """ def __init__(self): # 1 argument super(kb2s, self).__init__("kb2s", 1, 1) def execute(self, args): if not super(kb2s, self).execute(args): return None try: return str(int(args[0]) * 2) except ValueError: return None 07070100000174000081A40000000000000000000000016391BC3A0000022A000000000000000000000000000000000000005600000000tuned-2.19.0.29+git.b894a3e/tuned/profiles/functions/function_regex_search_ternary.pyimport re from . 
import base class regex_search_ternary(base.Function): """ Ternary regex operator, it takes arguments in the following form STR1, REGEX, STR2, STR3 If REGEX matches STR1 (re.search is used), STR2 is returned, otherwise STR3 is returned """ def __init__(self): # 4 arguments super(regex_search_ternary, self).__init__("regex_search_ternary", 4, 4) def execute(self, args): if not super(regex_search_ternary, self).execute(args): return None if re.search(args[1], args[0]): return args[2] else: return args[3] 07070100000175000081A40000000000000000000000016391BC3A000001A1000000000000000000000000000000000000004600000000tuned-2.19.0.29+git.b894a3e/tuned/profiles/functions/function_s2kb.pyimport os import tuned.logs from . import base from tuned.utils.commands import commands class s2kb(base.Function): """ Conversion function: sectors to kbytes """ def __init__(self): # 1 argument super(s2kb, self).__init__("s2kb", 1, 1) def execute(self, args): if not super(s2kb, self).execute(args): return None try: return str(int(round(int(args[0]) / 2))) except ValueError: return None 07070100000176000081A40000000000000000000000016391BC3A00000196000000000000000000000000000000000000004700000000tuned-2.19.0.29+git.b894a3e/tuned/profiles/functions/function_strip.pyimport os import tuned.logs from . import base from tuned.utils.commands import commands class strip(base.Function): """ Makes string from all arguments and strip it """ def __init__(self): # unlimited number of arguments, min 1 argument super(strip, self).__init__("strip", 0, 1) def execute(self, args): if not super(strip, self).execute(args): return None return "".join(args).strip() 07070100000177000081A40000000000000000000000016391BC3A00000253000000000000000000000000000000000000004C00000000tuned-2.19.0.29+git.b894a3e/tuned/profiles/functions/function_virt_check.pyimport os import tuned.logs from . 
import base from tuned.utils.commands import commands class virt_check(base.Function): """ Checks whether running inside virtual machine (VM) or on bare metal. If running inside VM expands to argument 1, otherwise expands to argument 2 (even on error). """ def __init__(self): # 2 arguments super(virt_check, self).__init__("virt_check", 2, 2) def execute(self, args): if not super(virt_check, self).execute(args): return None (ret, out) = self._cmd.execute(["virt-what"]) if ret == 0 and len(out) > 0: return args[0] return args[1] 07070100000178000081A40000000000000000000000016391BC3A00000861000000000000000000000000000000000000004200000000tuned-2.19.0.29+git.b894a3e/tuned/profiles/functions/functions.pyimport os import re import glob from . import repository import tuned.logs import tuned.consts as consts from tuned.utils.commands import commands log = tuned.logs.get() cmd = commands() class Functions(): """ Built-in functions """ def __init__(self): self._repository = repository.Repository() self._parse_init() def _parse_init(self, s = ""): self._cnt = 0 self._str = s self._len = len(s) self._stack = [] self._esc = False def _curr_char(self): return self._str[self._cnt] if self._cnt < self._len else "" def _curr_substr(self, _len): return self._str[self._cnt:self._cnt + _len] def _push_pos(self, esc): self._stack.append((esc, self._cnt)) def _sub(self, a, b, s): self._str = self._str[:a] + s + self._str[b + 1:] self._len = len(self._str) self._cnt += len(s) - (b - a + 1) if self._cnt < 0: self._cnt = 0 def _process_func(self, _from): sl = re.split(r'(?<!\\):', self._str[_from:self._cnt]) if sl[0] != "${f": return sl = [str(v).replace("\:", ":") for v in sl] if not re.match(r'\w+$', sl[1]): log.error("invalid function name '%s'" % sl[1]) return try: f = self._repository.load_func(sl[1]) except ImportError: log.error("function '%s' not implemented" % sl[1]) return s = f.execute(sl[2:]) if s is None: return self._sub(_from, self._cnt, s) def _process(self, s): 
		self._parse_init(s)
		while self._cnt < self._len:
			if self._curr_char() == "}":
				try:
					si = self._stack.pop()
				except IndexError:
					log.error("invalid variable syntax, unpaired '}' in: '%s'" % s)
					return self._str
				# if not escaped
				if not si[0]:
					self._process_func(si[1])
			elif self._curr_substr(2) == "${":
				self._push_pos(self._esc)
			if self._curr_char() == "\\":
				self._esc = True
			else:
				self._esc = False
			self._cnt += 1
		if len(self._stack):
			log.error("invalid variable syntax, unpaired '{' in: '%s'" % s)
		return self._str

	def expand(self, s):
		if s is None or s == "":
			return s
		# expand functions and convert all \${f:*} to ${f:*} (unescape)
		return re.sub(r'\\(\${f:.*})', r'\1', self._process(s))
07070100000179000081A40000000000000000000000016391BC3A00000504000000000000000000000000000000000000004300000000tuned-2.19.0.29+git.b894a3e/tuned/profiles/functions/repository.py
from tuned.utils.plugin_loader import PluginLoader
from . import base
import tuned.logs
import tuned.consts as consts
from tuned.utils.commands import commands

log = tuned.logs.get()

class Repository(PluginLoader):

	def __init__(self):
		super(Repository, self).__init__()
		self._functions = {}

	@property
	def functions(self):
		return self._functions

	def _set_loader_parameters(self):
		self._namespace = "tuned.profiles.functions"
		self._prefix = consts.FUNCTION_PREFIX
		self._interface = tuned.profiles.functions.base.Function

	def create(self, function_name):
		log.debug("creating function %s" % function_name)
		function_cls = self.load_plugin(function_name)
		function_instance = function_cls()
		self._functions[function_name] = function_instance
		return function_instance

	# loads function from plugin file and returns it;
	# if it is already loaded, just return it, it is not loaded again
	def load_func(self, function_name):
		if not function_name in self._functions:
			return self.create(function_name)
		return self._functions[function_name]

	def delete(self, function):
		assert isinstance(function, self._interface)
		log.debug("removing function %s" %
function) for k, v in list(self._functions.items()): if v == function: del self._functions[k] 0707010000017A000081A40000000000000000000000016391BC3A000010A0000000000000000000000000000000000000003500000000tuned-2.19.0.29+git.b894a3e/tuned/profiles/loader.pyimport tuned.profiles.profile import tuned.profiles.variables from tuned.utils.config_parser import ConfigParser, Error import tuned.consts as consts import os.path import collections import tuned.logs import re from tuned.profiles.exceptions import InvalidProfileException log = tuned.logs.get() class Loader(object): """ Profiles loader. """ __slots__ = ["_profile_locator", "_profile_merger", "_profile_factory", "_global_config", "_variables"] def __init__(self, profile_locator, profile_factory, profile_merger, global_config, variables): self._profile_locator = profile_locator self._profile_factory = profile_factory self._profile_merger = profile_merger self._global_config = global_config self._variables = variables def _create_profile(self, profile_name, config): return tuned.profiles.profile.Profile(profile_name, config) @classmethod def safe_name(cls, profile_name): return re.match(r'^[a-zA-Z0-9_.-]+$', profile_name) @property def profile_locator(self): return self._profile_locator def load(self, profile_names): if type(profile_names) is not list: profile_names = profile_names.split() profile_names = list(filter(self.safe_name, profile_names)) if len(profile_names) == 0: raise InvalidProfileException("No profile or invalid profiles were specified.") if len(profile_names) > 1: log.info("loading profiles: %s" % ", ".join(profile_names)) else: log.info("loading profile: %s" % profile_names[0]) profiles = [] processed_files = [] self._load_profile(profile_names, profiles, processed_files) if len(profiles) > 1: final_profile = self._profile_merger.merge(profiles) else: final_profile = profiles[0] final_profile.name = " ".join(profile_names) if "variables" in final_profile.units: 
self._variables.add_from_cfg(final_profile.units["variables"].options) del(final_profile.units["variables"]) # FIXME hack, do all variable expansions in one place self._expand_vars_in_devices(final_profile) self._expand_vars_in_regexes(final_profile) return final_profile def _expand_vars_in_devices(self, profile): for unit in profile.units: profile.units[unit].devices = self._variables.expand(profile.units[unit].devices) def _expand_vars_in_regexes(self, profile): for unit in profile.units: profile.units[unit].cpuinfo_regex = self._variables.expand(profile.units[unit].cpuinfo_regex) profile.units[unit].uname_regex = self._variables.expand(profile.units[unit].uname_regex) def _load_profile(self, profile_names, profiles, processed_files): for name in profile_names: filename = self._profile_locator.get_config(name, processed_files) if filename == "": continue if filename is None: raise InvalidProfileException("Cannot find profile '%s' in '%s'." % (name, list(reversed(self._profile_locator._load_directories)))) processed_files.append(filename) config = self._load_config_data(filename) profile = self._profile_factory.create(name, config) if "include" in profile.options: include_names = re.split(r"\s*[,;]\s*", self._variables.expand(profile.options.pop("include"))) self._load_profile(include_names, profiles, processed_files) profiles.append(profile) def _expand_profile_dir(self, profile_dir, string): return re.sub(r'(?<!\\)\$\{i:PROFILE_DIR\}', profile_dir, string) def _load_config_data(self, file_name): try: config_obj = ConfigParser(delimiters=('='), inline_comment_prefixes=('#')) config_obj.optionxform=str with open(file_name) as f: config_obj.read_file(f, file_name) except Error.__bases__ as e: raise InvalidProfileException("Cannot parse '%s'." 
% file_name, e) config = collections.OrderedDict() dir_name = os.path.dirname(file_name) for section in list(config_obj.sections()): config[section] = collections.OrderedDict() for option in config_obj.options(section): config[section][option] = config_obj.get(section, option, raw=True) config[section][option] = self._expand_profile_dir(dir_name, config[section][option]) if config[section].get("script") is not None: script_path = os.path.join(dir_name, config[section]["script"]) config[section]["script"] = [os.path.normpath(script_path)] return config 0707010000017B000081A40000000000000000000000016391BC3A00000EBA000000000000000000000000000000000000003600000000tuned-2.19.0.29+git.b894a3e/tuned/profiles/locator.pyimport os import tuned.consts as consts from tuned.utils.config_parser import ConfigParser, Error class Locator(object): """ Profiles locator and enumerator. """ __slots__ = ["_load_directories"] def __init__(self, load_directories): if type(load_directories) is not list: raise TypeError("load_directories parameter is not a list") self._load_directories = load_directories @property def load_directories(self): return self._load_directories def _get_config_filename(self, *path_parts): path_parts = list(path_parts) + ["tuned.conf"] config_name = os.path.join(*path_parts) return os.path.normpath(config_name) def get_config(self, profile_name, skip_files=None): ret = None conditional_load = profile_name[0:1] == "-" if conditional_load: profile_name = profile_name[1:] for dir_name in reversed(self._load_directories): # basename is protection not to get out of the path config_file = self._get_config_filename(dir_name, os.path.basename(profile_name)) if skip_files is not None and config_file in skip_files: ret = "" continue if os.path.isfile(config_file): return config_file if conditional_load and ret is None: ret = "" return ret def check_profile_name_format(self, profile_name): return profile_name is not None and profile_name != "" and "/" not in profile_name def 
parse_config(self, profile_name): if not self.check_profile_name_format(profile_name): return None config_file = self.get_config(profile_name) if config_file is None: return None try: config = ConfigParser(delimiters=('='), inline_comment_prefixes=('#'), allow_no_value=True) config.optionxform = str with open(config_file) as f: config.read_string("[" + consts.MAGIC_HEADER_NAME + "]\n" + f.read()) return config except (IOError, OSError, Error) as e: return None # Get profile attributes (e.g. summary, description), attrs is list of requested attributes, # if it is not list it is converted to list, defvals is list of default values to return if # attribute is not found, it is also converted to list if it is not list. # Returns list of the following format [status, profile_name, attr1_val, attr2_val, ...], # status is boolean. def get_profile_attrs(self, profile_name, attrs, defvals = None): # check types try: attrs_len = len(attrs) except TypeError: attrs = [attrs] attrs_len = 1 try: defvals_len = len(defvals) except TypeError: defvals = [defvals] defvals_len = 1 # Extend defvals if needed, last value is used for extension if defvals_len < attrs_len: defvals = defvals + ([defvals[-1]] * (attrs_len - defvals_len)) config = self.parse_config(profile_name) if config is None: return [False, "", "", ""] main_unit_in_config = consts.PLUGIN_MAIN_UNIT_NAME in config.sections() vals = [True, profile_name] for (attr, defval) in zip(attrs, defvals): if attr == "" or attr is None: vals[0] = False vals = vals + [""] elif main_unit_in_config and attr in config.options(consts.PLUGIN_MAIN_UNIT_NAME): vals = vals + [config.get(consts.PLUGIN_MAIN_UNIT_NAME, attr, raw=True)] else: vals = vals + [defval] return vals def list_profiles(self): profiles = set() for dir_name in self._load_directories: try: for profile_name in os.listdir(dir_name): config_file = self._get_config_filename(dir_name, profile_name) if os.path.isfile(config_file): profiles.add(profile_name) except OSError: pass 
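`get_profile_attrs` above coerces scalar `attrs`/`defvals` arguments to lists and pads `defvals` so every requested attribute has a default. A simplified sketch of that padding step (using `isinstance` instead of the `len()`/`TypeError` probe; the function name is illustrative only):

```python
def pad_defaults(attrs, defvals):
    # Coerce scalars to one-element lists, then right-pad defvals with its
    # last value until it is as long as attrs, mirroring the extension done
    # in Locator.get_profile_attrs before zipping attrs with defvals.
    if not isinstance(attrs, list):
        attrs = [attrs]
    if not isinstance(defvals, list):
        defvals = [defvals]
    if len(defvals) < len(attrs):
        defvals = defvals + [defvals[-1]] * (len(attrs) - len(defvals))
    return attrs, defvals
```

With `attrs=["summary", "description"]` and `defvals=[""]`, the defaults become `["", ""]`, so each attribute lookup has a fallback value.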
return profiles def get_known_names(self): return sorted(self.list_profiles()) def get_known_names_summary(self): return [(profile, self.get_profile_attrs(profile, [consts.PROFILE_ATTR_SUMMARY], [""])[2]) for profile in sorted(self.list_profiles())] 0707010000017C000081A40000000000000000000000016391BC3A00000888000000000000000000000000000000000000003500000000tuned-2.19.0.29+git.b894a3e/tuned/profiles/merger.pyimport collections from functools import reduce class Merger(object): """ Tool for merging multiple profiles into one. """ def __init__(self): pass def merge(self, configs): """ Merge multiple configurations into one. If there are multiple units of the same type, option 'devices' is set for each unit with respect to eliminating any duplicate devices. """ merged_config = reduce(self._merge_two, configs) return merged_config def _merge_two(self, profile_a, profile_b): """ Merge two profiles. The configuration of units with matching names is updated with options from the newer profile. If the 'replace' option of the newer unit is 'True', all options from the older unit are dropped. 
""" profile_a.options.update(profile_b.options) for unit_name, unit in list(profile_b.units.items()): if unit.replace or unit_name not in profile_a.units: profile_a.units[unit_name] = unit else: profile_a.units[unit_name].type = unit.type profile_a.units[unit_name].enabled = unit.enabled profile_a.units[unit_name].devices = unit.devices if unit.devices_udev_regex is not None: profile_a.units[unit_name].devices_udev_regex = unit.devices_udev_regex if unit.cpuinfo_regex is not None: profile_a.units[unit_name].cpuinfo_regex = unit.cpuinfo_regex if unit.uname_regex is not None: profile_a.units[unit_name].uname_regex = unit.uname_regex if unit.script_pre is not None: profile_a.units[unit_name].script_pre = unit.script_pre if unit.script_post is not None: profile_a.units[unit_name].script_post = unit.script_post if unit.drop is not None: for option in unit.drop: profile_a.units[unit_name].options.pop(option, None) unit.drop = None if unit_name == "script" and profile_a.units[unit_name].options.get("script", None) is not None: script = profile_a.units[unit_name].options.get("script", None) profile_a.units[unit_name].options.update(unit.options) profile_a.units[unit_name].options["script"] = script + profile_a.units[unit_name].options["script"] else: profile_a.units[unit_name].options.update(unit.options) return profile_a 0707010000017D000081A40000000000000000000000016391BC3A0000046E000000000000000000000000000000000000003600000000tuned-2.19.0.29+git.b894a3e/tuned/profiles/profile.pyimport tuned.profiles.unit import tuned.consts as consts import collections class Profile(object): """ Representation of a tuning profile. 
""" __slots__ = ["_name", "_options", "_units"] def __init__(self, name, config): self._name = name self._init_options(config) self._init_units(config) def _init_options(self, config): self._options = {} if consts.PLUGIN_MAIN_UNIT_NAME in config: self._options = dict(config[consts.PLUGIN_MAIN_UNIT_NAME]) def _init_units(self, config): self._units = collections.OrderedDict() for unit_name in config: if unit_name != consts.PLUGIN_MAIN_UNIT_NAME: new_unit = self._create_unit(unit_name, config[unit_name]) self._units[unit_name] = new_unit def _create_unit(self, name, config): return tuned.profiles.unit.Unit(name, config) @property def name(self): """ Profile name. """ return self._name @name.setter def name(self, value): self._name = value @property def units(self): """ Units included in the profile. """ return self._units @property def options(self): """ Profile global options. """ return self._options 0707010000017E000081A40000000000000000000000016391BC3A000009B9000000000000000000000000000000000000003300000000tuned-2.19.0.29+git.b894a3e/tuned/profiles/unit.pyimport collections import re class Unit(object): """ Unit description. 
""" __slots__ = [ "_name", "_type", "_enabled", "_replace", "_drop", "_devices", "_devices_udev_regex", \ "_cpuinfo_regex", "_uname_regex", "_script_pre", "_script_post", "_options" ] def __init__(self, name, config): self._name = name self._type = config.pop("type", self._name) self._enabled = config.pop("enabled", True) in [True, "True", "true", 1, "1"] self._replace = config.pop("replace", False) in [True, "True", "true", 1, "1"] self._drop = config.pop("drop", None) if self._drop is not None: self._drop = re.split(r"\b\s*[,;]\s*", str(self._drop)) self._devices = config.pop("devices", "*") self._devices_udev_regex = config.pop("devices_udev_regex", None) self._cpuinfo_regex = config.pop("cpuinfo_regex", None) self._uname_regex = config.pop("uname_regex", None) self._script_pre = config.pop("script_pre", None) self._script_post = config.pop("script_post", None) self._options = collections.OrderedDict(config) @property def name(self): return self._name @property def type(self): return self._type @type.setter def type(self, value): self._type = value @property def enabled(self): return self._enabled @enabled.setter def enabled(self, value): self._enabled = value @property def replace(self): return self._replace @property def drop(self): return self._drop @drop.setter def drop(self, value): self._drop = value @property def devices(self): return self._devices @devices.setter def devices(self, value): self._devices = value @property def devices_udev_regex(self): return self._devices_udev_regex @devices_udev_regex.setter def devices_udev_regex(self, value): self._devices_udev_regex = value @property def cpuinfo_regex(self): return self._cpuinfo_regex @cpuinfo_regex.setter def cpuinfo_regex(self, value): self._cpuinfo_regex = value @property def uname_regex(self): return self._uname_regex @uname_regex.setter def uname_regex(self, value): self._uname_regex = value @property def script_pre(self): return self._script_pre @script_pre.setter def script_pre(self, value): 
self._script_pre = value @property def script_post(self): return self._script_post @script_post.setter def script_post(self, value): self._script_post = value @property def options(self): return self._options @options.setter def options(self, value): self._options = value 0707010000017F000081A40000000000000000000000016391BC3A00000916000000000000000000000000000000000000003800000000tuned-2.19.0.29+git.b894a3e/tuned/profiles/variables.pyimport os import re import tuned.logs from .functions import functions as functions import tuned.consts as consts from tuned.utils.commands import commands from tuned.utils.config_parser import ConfigParser, Error log = tuned.logs.get() class Variables(): """ Storage and processing of variables used in profiles """ def __init__(self): self._cmd = commands() self._lookup_re = {} self._lookup_env = {} self._functions = functions.Functions() def _add_env_prefix(self, s, prefix): if s.find(prefix) == 0: return s return prefix + s def _check_var(self, variable): return re.match(r'\w+$',variable) def add_variable(self, variable, value): if value is None: return s = str(variable) if not self._check_var(variable): log.error("variable definition '%s' contains unallowed characters" % variable) return v = self.expand(value) # variables referenced by ${VAR}, $ can be escaped by two $, # i.e. 
the following will not expand: $${VAR} self._lookup_re[r'(?<!\\)\${' + re.escape(s) + r'}'] = v self._lookup_env[self._add_env_prefix(s, consts.ENV_PREFIX)] = v def add_from_file(self, filename): if not os.path.exists(filename): log.error("unable to find variables_file: '%s'" % filename) return try: config = ConfigParser(delimiters=('='), inline_comment_prefixes=('#'), allow_no_value=True) config.optionxform = str with open(filename) as f: config.read_string("[" + consts.MAGIC_HEADER_NAME + "]\n" + f.read(), filename) except Error: log.error("error parsing variables_file: '%s'" % filename) return for s in config.sections(): for o in config.options(s): self.add_variable(o, config.get(s, o, raw=True)) def add_from_cfg(self, cfg): for item in cfg: if str(item) == "include": self.add_from_file(os.path.normpath(cfg[item])) else: self.add_variable(item, cfg[item]) # expand static variables (no functions) def expand_static(self, value): return re.sub(r'\\(\${\w+})', r'\1', self._cmd.multiple_re_replace(self._lookup_re, value)) def expand(self, value): if value is None: return None # expand variables and convert all \${VAR} to ${VAR} (unescape) s = self.expand_static(str(value)) # expand built-in functions return self._functions.expand(s) def get_env(self): return self._lookup_env 07070100000180000041ED0000000000000000000000026391BC3A00000000000000000000000000000000000000000000002A00000000tuned-2.19.0.29+git.b894a3e/tuned/storage07070100000181000081A40000000000000000000000016391BC3A0000008D000000000000000000000000000000000000003600000000tuned-2.19.0.29+git.b894a3e/tuned/storage/__init__.pyfrom tuned.storage.storage import Storage from tuned.storage.factory import Factory from tuned.storage.pickle_provider import PickleProvider 07070100000182000081A40000000000000000000000016391BC3A00000166000000000000000000000000000000000000003500000000tuned-2.19.0.29+git.b894a3e/tuned/storage/factory.pyfrom . import interfaces from . 
import storage class Factory(interfaces.Factory): __slots__ = ["_storage_provider"] def __init__(self, storage_provider): self._storage_provider = storage_provider @property def provider(self): return self._storage_provider def create(self, namespace): return storage.Storage(self._storage_provider, namespace) 07070100000183000081A40000000000000000000000016391BC3A000001D9000000000000000000000000000000000000003800000000tuned-2.19.0.29+git.b894a3e/tuned/storage/interfaces.pyclass Factory(object): def create(self, namespace): raise NotImplementedError() class Provider(object): def set(self, namespace, option, value): raise NotImplementedError() def get(self, namespace, option, default=None): raise NotImplementedError() def unset(self, namespace, option): raise NotImplementedError() def clear(self): raise NotImplementedError() def load(self): raise NotImplementedError() def save(self): raise NotImplementedError() 07070100000184000081A40000000000000000000000016391BC3A00000584000000000000000000000000000000000000003D00000000tuned-2.19.0.29+git.b894a3e/tuned/storage/pickle_provider.pyfrom . 
import interfaces import tuned.logs import pickle import os import tuned.consts as consts log = tuned.logs.get() class PickleProvider(interfaces.Provider): __slots__ = ["_path", "_data"] def __init__(self, path=None): if path is None: path = consts.DEFAULT_STORAGE_FILE self._path = path self._data = {} def set(self, namespace, option, value): self._data.setdefault(namespace, {}) self._data[namespace][option] = value def get(self, namespace, option, default=None): self._data.setdefault(namespace, {}) return self._data[namespace].get(option, default) def unset(self, namespace, option): self._data.setdefault(namespace, {}) if option in self._data[namespace]: del self._data[namespace][option] def save(self): try: log.debug("Saving %s" % str(self._data)) with open(self._path, "wb") as f: pickle.dump(self._data, f) except (OSError, IOError) as e: log.error("Error saving storage file '%s': %s" % (self._path, e)) def load(self): try: with open(self._path, "rb") as f: self._data = pickle.load(f) except (OSError, IOError) as e: log.debug("Error loading storage file '%s': %s" % (self._path, e)) self._data = {} except EOFError: self._data = {} def clear(self): self._data.clear() try: os.unlink(self._path) except (OSError, IOError) as e: log.debug("Error removing storage file '%s': %s" % (self._path, e)) 07070100000185000081A40000000000000000000000016391BC3A000001E2000000000000000000000000000000000000003500000000tuned-2.19.0.29+git.b894a3e/tuned/storage/storage.pyclass Storage(object): __slots__ = ["_storage_provider", "_namespace"] def __init__(self, storage_provider, namespace): self._storage_provider = storage_provider self._namespace = namespace def set(self, option, value): self._storage_provider.set(self._namespace, option, value) def get(self, option, default=None): return self._storage_provider.get(self._namespace, option, default) def unset(self, option): self._storage_provider.unset(self._namespace, option) 
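The `Storage`/`PickleProvider` pair above implements namespaced key-value persistence via `pickle`: options are grouped under a namespace, saved as one pickled dict, and a missing or truncated state file falls back to empty data. A minimal sketch of the same pattern (`PickleStore` is a hypothetical name, no TuneD dependencies):

```python
import os
import pickle
import tempfile

class PickleStore:
    """Namespaced key-value store persisted with pickle,
    mirroring the Provider/Storage split shown above."""
    def __init__(self, path):
        self._path = path
        self._data = {}

    def set(self, namespace, option, value):
        self._data.setdefault(namespace, {})[option] = value

    def get(self, namespace, option, default=None):
        return self._data.get(namespace, {}).get(option, default)

    def save(self):
        with open(self._path, "wb") as f:
            pickle.dump(self._data, f)

    def load(self):
        try:
            with open(self._path, "rb") as f:
                self._data = pickle.load(f)
        except (OSError, EOFError):
            # Missing or truncated file starts us with empty state,
            # just as PickleProvider.load() falls back to {}.
            self._data = {}

path = os.path.join(tempfile.mkdtemp(), "store.pickle")
store = PickleStore(path)
store.set("profile", "name", "balanced")
store.save()

fresh = PickleStore(path)
fresh.load()
```

A fresh instance pointed at the same path sees the saved data; pointing it at a nonexistent path simply yields defaults.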
07070100000186000041ED0000000000000000000000026391BC3A00000000000000000000000000000000000000000000002800000000tuned-2.19.0.29+git.b894a3e/tuned/units07070100000187000081A40000000000000000000000016391BC3A00000017000000000000000000000000000000000000003400000000tuned-2.19.0.29+git.b894a3e/tuned/units/__init__.pyfrom .manager import * 07070100000188000081A40000000000000000000000016391BC3A00001793000000000000000000000000000000000000003300000000tuned-2.19.0.29+git.b894a3e/tuned/units/manager.pyimport collections import os import re import traceback import tuned.exceptions import tuned.logs import tuned.plugins.exceptions import tuned.consts as consts from tuned.utils.global_config import GlobalConfig from tuned.utils.commands import commands log = tuned.logs.get() __all__ = ["Manager"] class Manager(object): """ Manager creates plugin instances and keeps a track of them. """ def __init__(self, plugins_repository, monitors_repository, def_instance_priority, hardware_inventory, config = None): super(Manager, self).__init__() self._plugins_repository = plugins_repository self._monitors_repository = monitors_repository self._def_instance_priority = def_instance_priority self._hardware_inventory = hardware_inventory self._instances = [] self._plugins = [] self._config = config or GlobalConfig() self._cmd = commands() @property def plugins(self): return self._plugins @property def instances(self): return self._instances @property def plugins_repository(self): return self._plugins_repository def _unit_matches_cpuinfo(self, unit): if unit.cpuinfo_regex is None: return True cpuinfo_string = self._config.get(consts.CFG_CPUINFO_STRING) if cpuinfo_string is None: cpuinfo_string = self._cmd.read_file("/proc/cpuinfo") return re.search(unit.cpuinfo_regex, cpuinfo_string, re.MULTILINE) is not None def _unit_matches_uname(self, unit): if unit.uname_regex is None: return True uname_string = self._config.get(consts.CFG_UNAME_STRING) if uname_string is None: uname_string = " 
".join(os.uname()) return re.search(unit.uname_regex, uname_string, re.MULTILINE) is not None def create(self, instances_config): instance_info_list = [] for instance_name, instance_info in list(instances_config.items()): if not instance_info.enabled: log.debug("skipping disabled instance '%s'" % instance_name) continue if not self._unit_matches_cpuinfo(instance_info): log.debug("skipping instance '%s', cpuinfo does not match" % instance_name) continue if not self._unit_matches_uname(instance_info): log.debug("skipping instance '%s', uname does not match" % instance_name) continue instance_info.options.setdefault("priority", self._def_instance_priority) instance_info.options["priority"] = int(instance_info.options["priority"]) instance_info_list.append(instance_info) instance_info_list.sort(key=lambda x: x.options["priority"]) plugins_by_name = collections.OrderedDict() for instance_info in instance_info_list: instance_info.options.pop("priority") plugins_by_name[instance_info.type] = None for plugin_name, none in list(plugins_by_name.items()): try: plugin = self._plugins_repository.create(plugin_name) plugins_by_name[plugin_name] = plugin self._plugins.append(plugin) except tuned.plugins.exceptions.NotSupportedPluginException: log.info("skipping plugin '%s', not supported on your system" % plugin_name) continue except Exception as e: log.error("failed to initialize plugin %s" % plugin_name) log.exception(e) continue instances = [] for instance_info in instance_info_list: plugin = plugins_by_name[instance_info.type] if plugin is None: continue log.debug("creating '%s' (%s)" % (instance_info.name, instance_info.type)) new_instance = plugin.create_instance(instance_info.name, instance_info.devices, instance_info.devices_udev_regex, \ instance_info.script_pre, instance_info.script_post, instance_info.options) instances.append(new_instance) for instance in instances: instance.plugin.init_devices() instance.plugin.assign_free_devices(instance) 
instance.plugin.initialize_instance(instance) # At this point we should be able to start the HW events # monitoring/processing thread, without risking race conditions self._hardware_inventory.start_processing_events() self._instances.extend(instances) def _try_call(self, caller, exc_ret, f, *args, **kwargs): try: return f(*args, **kwargs) except Exception as e: trace = traceback.format_exc() log.error("BUG: Unhandled exception in %s: %s" % (caller, str(e))) log.error(trace) return exc_ret def destroy_all(self): for instance in self._instances: log.debug("destroying instance %s" % instance.name) self._try_call("destroy_all", None, instance.plugin.destroy_instance, instance) for plugin in self._plugins: log.debug("cleaning plugin '%s'" % plugin.name) self._try_call("destroy_all", None, plugin.cleanup) del self._plugins[:] del self._instances[:] def update_monitors(self): for monitor in self._monitors_repository.monitors: log.debug("updating monitor %s" % monitor) self._try_call("update_monitors", None, monitor.update) def start_tuning(self): for instance in self._instances: self._try_call("start_tuning", None, instance.apply_tuning) def verify_tuning(self, ignore_missing): ret = True for instance in self._instances: res = self._try_call("verify_tuning", False, instance.verify_tuning, ignore_missing) if res == False: ret = False return ret def update_tuning(self): for instance in self._instances: self._try_call("update_tuning", None, instance.update_tuning) # full_rollback is a helper telling plugins whether soft or full roll # back is needed, e.g. for bootloader plugin we need e.g grub.cfg # tuning to persist across reboots and restarts of the daemon, so in # this case the full_rollback is usually set to False, but we also # need to clean it all up when TuneD is disabled or the profile is # changed. In this case the full_rollback is set to True. In practice # it means to remove all temporal or helper files, unpatch third # party config files, etc. 
def stop_tuning(self, full_rollback = False): self._hardware_inventory.stop_processing_events() for instance in reversed(self._instances): self._try_call("stop_tuning", None, instance.unapply_tuning, full_rollback) 07070100000189000041ED0000000000000000000000026391BC3A00000000000000000000000000000000000000000000002800000000tuned-2.19.0.29+git.b894a3e/tuned/utils0707010000018A000081A40000000000000000000000016391BC3A00000000000000000000000000000000000000000000003400000000tuned-2.19.0.29+git.b894a3e/tuned/utils/__init__.py0707010000018B000081A40000000000000000000000016391BC3A00003AB2000000000000000000000000000000000000003400000000tuned-2.19.0.29+git.b894a3e/tuned/utils/commands.pyimport errno import hashlib import tuned.logs import copy import os import shutil import tuned.consts as consts import re from subprocess import * from tuned.exceptions import TunedException log = tuned.logs.get() class commands: def __init__(self, logging = True): self._logging = logging def _error(self, msg): if self._logging: log.error(msg) def _debug(self, msg): if self._logging: log.debug(msg) def get_bool(self, value): v = str(value).upper().strip() return {"Y":"1", "YES":"1", "T":"1", "TRUE":"1", "N":"0", "NO":"0", "F":"0", "FALSE":"0"}.get(v, value) def remove_ws(self, s): return re.sub('\s+', ' ', str(s)).strip() def unquote(self, v): return re.sub("^\"(.*)\"$", r"\1", v) # escape escape character (by default '\') def escape(self, s, what_escape = "\\", escape_by = "\\"): return s.replace(what_escape, "%s%s" % (escape_by, what_escape)) # clear escape characters (by default '\') def unescape(self, s, escape_char = "\\"): return s.replace(escape_char, "") # add spaces to align s2 to pos, returns resulting string: s1 + spaces + s2 def align_str(self, s1, pos, s2): return s1 + " " * (pos - len(s1)) + s2 # convert dictionary 'd' to flat list and return it # it uses sort on the dictionary items to return consistent results # for directories with different inserte/delete history def 
dict2list(self, d): l = [] if d is not None: for i in sorted(d.items()): l += list(i) return l # Compile regex to speedup multiple_re_replace or re_lookup def re_lookup_compile(self, d): if d is None: return None return re.compile("(%s)" % ")|(".join(list(d.keys()))) # Do multiple regex replaces in 's' according to lookup table described by # dictionary 'd', e.g.: d = {"re1": "replace1", "re2": "replace2", ...} # r can be regex precompiled by re_lookup_compile for speedup def multiple_re_replace(self, d, s, r = None, flags = 0): if d is None: if r is None: return s else: if len(d) == 0 or s is None: return s if r is None: r = self.re_lookup_compile(d) return r.sub(lambda mo: list(d.values())[mo.lastindex - 1], s, flags) # Do regex lookup on 's' according to lookup table described by # dictionary 'd' and return corresponding value from the dictionary, # e.g.: d = {"re1": val1, "re2": val2, ...} # r can be regex precompiled by re_lookup_compile for speedup def re_lookup(self, d, s, r = None): if len(d) == 0 or s is None: return None if r is None: r = self.re_lookup_compile(d) mo = r.search(s) if mo: return list(d.values())[mo.lastindex - 1] return None def write_to_file(self, f, data, makedir = False, no_error = False): self._debug("Writing to file: '%s' < '%s'" % (f, data)) if makedir: d = os.path.dirname(f) if os.path.isdir(d): makedir = False try: if makedir: os.makedirs(d) fd = open(f, "w") fd.write(str(data)) fd.close() rc = True except (OSError,IOError) as e: rc = False if not no_error: self._error("Writing to file '%s' error: '%s'" % (f, e)) return rc def read_file(self, f, err_ret = "", no_error = False): old_value = err_ret try: f = open(f, "r") old_value = f.read() f.close() except (OSError,IOError) as e: if not no_error: self._error("Error when reading file '%s': '%s'" % (f, e)) self._debug("Read data from file: '%s' > '%s'" % (f, old_value)) return old_value def rmtree(self, f, no_error = False): self._debug("Removing tree: '%s'" % f) if 
os.path.exists(f): try: shutil.rmtree(f, no_error) except OSError as error: if not no_error: log.error("cannot remove tree '%s': '%s'" % (f, str(error))) return False return True def unlink(self, f, no_error = False): self._debug("Removing file: '%s'" % f) if os.path.exists(f): try: os.unlink(f) except OSError as error: if not no_error: log.error("cannot remove file '%s': '%s'" % (f, str(error))) return False return True def rename(self, src, dst, no_error = False): self._debug("Renaming file '%s' to '%s'" % (src, dst)) try: os.rename(src, dst) except OSError as error: if not no_error: log.error("cannot rename file '%s' to '%s': '%s'" % (src, dst, str(error))) return False return True def copy(self, src, dst, no_error = False): try: log.debug("copying file '%s' to '%s'" % (src, dst)) shutil.copy(src, dst) return True except IOError as e: if not no_error: log.error("cannot copy file '%s' to '%s': %s" % (src, dst, e)) return False def replace_in_file(self, f, pattern, repl): data = self.read_file(f) if len(data) <= 0: return False; return self.write_to_file(f, re.sub(pattern, repl, data, flags = re.MULTILINE)) # do multiple replaces in file 'f' by using dictionary 'd', # e.g.: d = {"re1": val1, "re2": val2, ...} def multiple_replace_in_file(self, f, d): data = self.read_file(f) if len(data) <= 0: return False; return self.write_to_file(f, self.multiple_re_replace(d, data, flags = re.MULTILINE)) # makes sure that options from 'd' are set to values from 'd' in file 'f', # when needed it edits options or add new options if they don't # exist and 'add' is set to True, 'd' has the following form: # d = {"option_1": value_1, "option_2": value_2, ...} def add_modify_option_in_file(self, f, d, add = True): data = self.read_file(f) for opt in d: o = str(opt) v = str(d[opt]) if re.search(r"\b" + o + r"\s*=.*$", data, flags = re.MULTILINE) is None: if add: if len(data) > 0 and data[-1] != "\n": data += "\n" data += "%s=\"%s\"\n" % (o, v) else: data = re.sub(r"\b(" + o + 
r"\s*=).*$", r"\1" + "\"" + self.escape(v) + "\"", data, flags = re.MULTILINE) return self.write_to_file(f, data) # calculates md5sum of file 'f' def md5sum(self, f): data = self.read_file(f) return hashlib.md5(str(data).encode("utf-8")).hexdigest() # calculates sha256sum of file 'f' def sha256sum(self, f): data = self.read_file(f) return hashlib.sha256(str(data).encode("utf-8")).hexdigest() # returns machine ID or empty string "" in case of error def get_machine_id(self, no_error = True): return self.read_file(consts.MACHINE_ID_FILE, no_error).strip() # "no_errors" can be a list of return codes not treated as errors, if 0 is in no_errors, it means any error # returns (retcode, out), where retcode is the exit code of the executed process or -errno if # OSError or IOError exception happened def execute(self, args, shell = False, cwd = None, env = {}, no_errors = [], return_err = False): retcode = 0 _environment = os.environ.copy() _environment["LC_ALL"] = "C" _environment.update(env) self._debug("Executing %s."
% str(args)) out = "" err_msg = None try: proc = Popen(args, stdout = PIPE, stderr = PIPE, \ env = _environment, \ shell = shell, cwd = cwd, \ close_fds = True, \ universal_newlines = True) out, err = proc.communicate() retcode = proc.returncode if retcode and not retcode in no_errors and not 0 in no_errors: err_out = err[:-1] if len(err_out) == 0: err_out = out[:-1] err_msg = "Executing %s error: %s" % (args[0], err_out) if not return_err: self._error(err_msg) except (OSError, IOError) as e: retcode = -e.errno if e.errno is not None else -1 if not abs(retcode) in no_errors and not 0 in no_errors: err_msg = "Executing %s error: %s" % (args[0], e) if not return_err: self._error(err_msg) if return_err: return retcode, out, err_msg else: return retcode, out # Helper for parsing kernel options like: # [always] never # It will return 'always' def get_active_option(self, options, dosplit = True): m = re.match(r'.*\[([^\]]+)\].*', options) if m: return m.group(1) if dosplit: return options.split()[0] return options # Checks whether CPU is online def is_cpu_online(self, cpu): scpu = str(cpu) # CPU0 is always online return cpu == "0" or self.read_file("/sys/devices/system/cpu/cpu%s/online" % scpu, no_error = True).strip() == "1" # Converts hexadecimal CPU mask to CPU list def hex2cpulist(self, mask): if mask is None: return None mask = str(mask).replace(",", "") try: m = int(mask, 16) except ValueError: log.error("invalid hexadecimal mask '%s'" % str(mask)) return [] return self.bitmask2cpulist(m) # Converts an integer bitmask to a list of cpus (e.g. [0,3,4]) def bitmask2cpulist(self, mask): cpu = 0 cpus = [] while mask > 0: if mask & 1: cpus.append(cpu) mask >>= 1 cpu += 1 return cpus # Unpacks CPU list, i.e. 1-3 will be converted to 1, 2, 3, supports # hexmasks that needs to be prefixed by "0x". Hexmasks can have commas, # which will be removed. If combining hexmasks with CPU list they need # to be separated by ",,", e.g.: 0-3, 0xf,, 6. 
It also supports negation # cpus by specifying "^" or "!", e.g.: 0-5, ^3, will output the list as # "0,1,2,4,5" (excluding 3). Note: negation supports only cpu numbers. # If "strip_chars" is not None and l is not list, we try strip characters. # It should be string with list of chars that is send to string.strip method # Default is english single and double quotes ("') rhbz#1891036 def cpulist_unpack(self, l, strip_chars='\'"'): rl = [] if l is None: return l ll = l if type(ll) is not list: if strip_chars is not None: ll = str(ll).strip(strip_chars) ll = str(ll).split(",") ll2 = [] negation_list = [] hexmask = False hv = "" # Remove commas from hexmasks for v in ll: sv = str(v) if hexmask: if len(sv) == 0: hexmask = False ll2.append(hv) hv = "" else: hv += sv else: if sv[0:2].lower() == "0x": hexmask = True hv = sv elif sv and (sv[0] == "^" or sv[0] == "!"): nl = sv[1:].split("-") try: if (len(nl) > 1): negation_list += list(range( int(nl[0]), int(nl[1]) + 1 ) ) else: negation_list.append(int(sv[1:])) except ValueError: return [] else: if len(sv) > 0: ll2.append(sv) if len(hv) > 0: ll2.append(hv) for v in ll2: vl = v.split("-") if v[0:2].lower() == "0x": rl += self.hex2cpulist(v) else: try: if len(vl) > 1: rl += list(range(int(vl[0]), int(vl[1]) + 1)) else: rl.append(int(vl[0])) except ValueError: return [] cpu_list = sorted(list(set(rl))) # Remove negated cpus after expanding for cpu in negation_list: if cpu in cpu_list: cpu_list.remove(cpu) return cpu_list # Packs CPU list, i.e. 1, 2, 3 will be converted to 1-3. 
It unpacks the # CPU list through cpulist_unpack first; see its description for the # details of the input syntax def cpulist_pack(self, l): l = self.cpulist_unpack(l) if l is None or len(l) == 0: return l i = 0 j = i rl = [] while i + 1 < len(l): if l[i + 1] - l[i] != 1: if j != i: rl.append(str(l[j]) + "-" + str(l[i])) else: rl.append(str(l[i])) j = i + 1 i += 1 if j + 1 < len(l): rl.append(str(l[j]) + "-" + str(l[-1])) else: rl.append(str(l[-1])) return rl # Inverts CPU list (i.e. makes its complement) def cpulist_invert(self, l): cpus = self.cpulist_unpack(l) online = self.cpulist_unpack(self.read_file("/sys/devices/system/cpu/online")) return list(set(online) - set(cpus)) # Converts CPU list to hexadecimal CPU mask def cpulist2hex(self, l): if l is None: return None ul = self.cpulist_unpack(l) if ul is None: return None m = self.cpulist2bitmask(ul) s = "%x" % m ls = len(s) if ls % 8 != 0: ls += 8 - ls % 8 s = s.zfill(ls) return ",".join(s[i:i + 8] for i in range(0, len(s), 8)) def cpulist2bitmask(self, l): m = 0 for v in l: m |= pow(2, v) return m def cpulist2string(self, l): return ",".join(str(v) for v in l) # Do not do balancing on a patched Python 2 interpreter (rhbz#1028122). # It means less CPU usage on a patched interpreter. On a non-patched interpreter # it is not allowed to sleep longer than 50 ms. 
    def wait(self, terminate, time):
        try:
            return terminate.wait(time, False)
        except:
            return terminate.wait(time)

    def get_size(self, s):
        s = str(s).strip().upper()
        for unit in ["KB", "MB", "GB", ""]:
            unit_ix = s.rfind(unit)
            if unit_ix == -1:
                continue
            try:
                val = int(s[:unit_ix])
                u = s[unit_ix:]
                if u == "KB":
                    val *= 1024
                elif u == "MB":
                    val *= 1024 * 1024
                elif u == "GB":
                    val *= 1024 * 1024 * 1024
                elif u != "":
                    val = None
                return val
            except ValueError:
                return None

    def get_active_profile(self):
        profile_name = ""
        mode = ""
        try:
            with open(consts.ACTIVE_PROFILE_FILE, "r") as f:
                profile_name = f.read().strip()
        except IOError as e:
            if e.errno != errno.ENOENT:
                raise TunedException("Failed to read active profile: %s" % e)
        except (OSError, EOFError) as e:
            raise TunedException("Failed to read active profile: %s" % e)
        try:
            with open(consts.PROFILE_MODE_FILE, "r") as f:
                mode = f.read().strip()
            if mode not in ["", consts.ACTIVE_PROFILE_AUTO, consts.ACTIVE_PROFILE_MANUAL]:
                raise TunedException("Invalid value in file %s." % consts.PROFILE_MODE_FILE)
        except IOError as e:
            if e.errno != errno.ENOENT:
                raise TunedException("Failed to read profile mode: %s" % e)
        except (OSError, EOFError) as e:
            raise TunedException("Failed to read profile mode: %s" % e)
        if mode == "":
            manual = None
        else:
            manual = mode == consts.ACTIVE_PROFILE_MANUAL
        if profile_name == "":
            profile_name = None
        return (profile_name, manual)

    def save_active_profile(self, profile_name, manual):
        try:
            with open(consts.ACTIVE_PROFILE_FILE, "w") as f:
                if profile_name is not None:
                    f.write(profile_name + "\n")
        except (OSError, IOError) as e:
            raise TunedException("Failed to save active profile: %s" % e.strerror)
        try:
            with open(consts.PROFILE_MODE_FILE, "w") as f:
                mode = consts.ACTIVE_PROFILE_MANUAL if manual else consts.ACTIVE_PROFILE_AUTO
                f.write(mode + "\n")
        except (OSError, IOError) as e:
            raise TunedException("Failed to save profile mode: %s" % e.strerror)

    def get_post_loaded_profile(self):
        profile_name = ""
        try:
            with open(consts.POST_LOADED_PROFILE_FILE, "r") as f:
                profile_name = f.read().strip()
        except IOError as e:
            if e.errno != errno.ENOENT:
                raise TunedException("Failed to read the active post-loaded profile: %s" % e)
        except (OSError, EOFError) as e:
            raise TunedException("Failed to read the active post-loaded profile: %s" % e)
        if profile_name == "":
            profile_name = None
        return profile_name

    def save_post_loaded_profile(self, profile_name):
        try:
            with open(consts.POST_LOADED_PROFILE_FILE, "w") as f:
                if profile_name is not None:
                    f.write(profile_name + "\n")
        except (OSError, IOError) as e:
            raise TunedException("Failed to save the active post-loaded profile: %s" % e.strerror)

File tuned-2.19.0.29+git.b894a3e/tuned/utils/config_parser.py:

# ConfigParser wrapper providing compatibility layer for python 2.7/3
try:
    python3 = True
    import configparser as cp
except ImportError:
    python3 = False
    import ConfigParser as cp
    from StringIO import StringIO
import re

class Error(cp.Error):
    pass

if python3:
    class ConfigParser(cp.ConfigParser):
        pass
else:
    class ConfigParser(cp.ConfigParser):
        def __init__(self, delimiters=None, inline_comment_prefixes=None, strict=True, *args, **kwargs):
            delims = "".join(list(delimiters))
            # REs taken from the python-2.7 ConfigParser
            self.OPTCRE = re.compile(
                r'(?P<option>[^' + delims + r'\s][^' + delims + r']*)'
                r'\s*(?P<vi>[' + delims + r'])\s*'
                r'(?P<value>.*)$'
            )
            self.OPTCRE_NV = re.compile(
                r'(?P<option>[^' + delims + r'\s][^' + delims + r']*)'
                r'\s*(?:'
                r'(?P<vi>[' + delims + r'])\s*'
                r'(?P<value>.*))?$'
            )
            cp.ConfigParser.__init__(self, *args, **kwargs)
            self._inline_comment_prefixes = inline_comment_prefixes or []
            self._re = re.compile(r"\s+(%s).*" % ")|(".join(list(self._inline_comment_prefixes)))

        def read_string(self, string, source="<string>"):
            sfile = StringIO(string)
            self.read_file(sfile, source)

        def readfp(self, fp, filename=None):
            cp.ConfigParser.readfp(self, fp, filename)
            # remove inline comments
            all_sections = [self._defaults]
            all_sections.extend(self._sections.values())
            for options in all_sections:
                for name, val in options.items():
                    options[name] = self._re.sub("", val)

        def read_file(self, f, source="<???>"):
            self.readfp(f, source)

File tuned-2.19.0.29+git.b894a3e/tuned/utils/global_config.py:

import tuned.logs
from tuned.utils.config_parser import ConfigParser, Error
from tuned.exceptions import TunedException
import tuned.consts as consts
from tuned.utils.commands import commands

__all__ = ["GlobalConfig"]

log = tuned.logs.get()

class GlobalConfig():
    def __init__(self, config_file=consts.GLOBAL_CONFIG_FILE):
        self._cfg = {}
        self.load_config(file_name=config_file)
        self._cmd = commands()

    @staticmethod
    def get_global_config_spec():
        """
        Easy validation mimicking configobj.
        Returns two dicts, the first with default values (default None):
        global_default[consts.CFG_SOMETHING] = consts.CFG_DEF_SOMETHING or None
        the second with the configobj function for the value type (default
        "get" for string; others e.g. getboolean, getint):
        global_function[consts.CFG_SOMETHING] = consts.CFG_FUNC_SOMETHING or get
        """
        options = [opt for opt in dir(consts)
                   if opt.startswith("CFG_")
                   and not opt.startswith("CFG_FUNC_")
                   and not opt.startswith("CFG_DEF_")]
        global_default = dict((getattr(consts, opt), getattr(consts, "CFG_DEF_" + opt[4:], None))
                              for opt in options)
        global_function = dict((getattr(consts, opt), getattr(consts, "CFG_FUNC_" + opt[4:], "get"))
                               for opt in options)
        return global_default, global_function

    def load_config(self, file_name=consts.GLOBAL_CONFIG_FILE):
        """
        Loads global configuration file.
        """
        log.debug("reading and parsing global configuration file '%s'" % file_name)
        try:
            config_parser = ConfigParser(delimiters=('='), inline_comment_prefixes=('#'))
            config_parser.optionxform = str
            with open(file_name) as f:
                config_parser.read_string("[" + consts.MAGIC_HEADER_NAME + "]\n" + f.read(), file_name)
            self._cfg, _global_config_func = self.get_global_config_spec()
            for option in config_parser.options(consts.MAGIC_HEADER_NAME):
                if option in self._cfg:
                    try:
                        func = getattr(config_parser, _global_config_func[option])
                        self._cfg[option] = func(consts.MAGIC_HEADER_NAME, option)
                    except Error:
                        raise TunedException("Global TuneD configuration file '%s' is not valid." % file_name)
                else:
                    log.info("Unknown option '%s' in global config file '%s'." % (option, file_name))
                    self._cfg[option] = config_parser.get(consts.MAGIC_HEADER_NAME, option, raw=True)
        except IOError as e:
            raise TunedException("Global TuneD configuration file '%s' not found." % file_name)
        except Error as e:
            raise TunedException("Error parsing global TuneD configuration file '%s'." % file_name)

    def get(self, key, default=None):
        return self._cfg.get(key, default)

    def get_bool(self, key, default=None):
        if self._cmd.get_bool(self.get(key, default)) == "1":
            return True
        return False

    def set(self, key, value):
        self._cfg[key] = value

    def get_size(self, key, default=None):
        val = self.get(key)
        if val is None:
            return default
        ret = self._cmd.get_size(val)
        if ret is None:
            log.error("Error parsing value '%s', using '%s'." % (val, default))
            return default
        else:
            return ret

File tuned-2.19.0.29+git.b894a3e/tuned/utils/nettool.py:

__all__ = ["ethcard"]

import tuned.logs
from subprocess import *
import re

log = tuned.logs.get()

class Nettool:
    _advertise_values = {
        # [ half, full ]
        10     : [ 0x001, 0x002 ],
        100    : [ 0x004, 0x008 ],
        1000   : [ 0x010, 0x020 ],
        2500   : [ 0, 0x8000 ],
        10000  : [ 0, 0x1000 ],
        "auto" : 0x03F
    }
    _disabled = False

    def __init__(self, interface):
        self._interface = interface
        self.update()
        log.debug("%s: speed %s, full duplex %s, autoneg %s, link %s"
                  % (interface, self.speed, self.full_duplex, self.autoneg, self.link))
        log.debug("%s: supports: autoneg %s, modes %s"
                  % (interface, self.supported_autoneg, self.supported_modes))
        log.debug("%s: advertises: autoneg %s, modes %s"
                  % (interface, self.advertised_autoneg, self.advertised_modes))

    # def __del__(self):
    #     if self.supported_autoneg:
    #         self._set_advertise(self._advertise_values["auto"])

    def _clean_status(self):
        self.speed = 0
        self.full_duplex = False
        self.autoneg = False
        self.link = False
        self.supported_modes = []
        self.supported_autoneg = False
        self.advertised_modes = []
        self.advertised_autoneg = False

    def _calculate_mode(self, modes):
        mode = 0
        for m in modes:
            mode += self._advertise_values[m[0]][1 if m[1] else 0]
        return mode

    def _set_autonegotiation(self, enable):
        if self.autoneg == enable:
            return True
        if not self.supported_autoneg:
            return False
        return 0 == call(["ethtool", "-s", self._interface, "autoneg",
                          "on" if enable else "off"], close_fds=True)

    def _set_advertise(self, value):
        if not self._set_autonegotiation(True):
            return False
        return 0 == call(["ethtool", "-s", self._interface, "advertise",
                          "0x%03x" % value], close_fds=True)

    def get_max_speed(self):
        max = 0
        for mode in self.supported_modes:
            if mode[0] > max:
                max = mode[0]
        if max > 0:
            return max
        else:
            return 1000

    def set_max_speed(self):
        if self._disabled or not self.supported_autoneg:
            return False
        #if self._set_advertise(self._calculateMode(self.supported_modes)):
        if self._set_advertise(self._advertise_values["auto"]):
            self.update()
            return True
        else:
            return False

    def set_speed(self, speed):
        if self._disabled or not self.supported_autoneg:
            return False
        mode = 0
        for am in self._advertise_values:
            if am == "auto":
                continue
            if am <= speed:
                mode += self._advertise_values[am][0]
                mode += self._advertise_values[am][1]
        effective_mode = mode & self._calculate_mode(self.supported_modes)
        log.debug("%s: set_speed(%d) - effective_mode 0x%03x" % (self._interface, speed, effective_mode))
        if self._set_advertise(effective_mode):
            self.update()
            return True
        else:
            return False

    def update(self):
        if self._disabled:
            return
        # run ethtool and preprocess output
        p_ethtool = Popen(["ethtool", self._interface],
                          stdout=PIPE, stderr=PIPE, close_fds=True,
                          universal_newlines=True)
        p_filter = Popen(["sed", r"s/^\s*//;s/:\s*/:\n/g"],
                         stdin=p_ethtool.stdout, stdout=PIPE,
                         universal_newlines=True, close_fds=True)
        output = p_filter.communicate()[0]
        errors = p_ethtool.communicate()[1]
        if errors != "":
            log.warning("%s: some errors were reported by 'ethtool'" % self._interface)
            log.debug("%s: %s" % (self._interface, errors.replace("\n", r"\n")))
            self._clean_status()
            self._disabled = True
            return
        # parses output - kind of FSM
        self._clean_status()
        re_speed = re.compile(r"(\d+)")
        re_mode = re.compile(r"(\d+)baseT/(Half|Full)")
        state = "wait"
        for line in output.split("\n"):
            if line.endswith(":"):
                section = line[:-1]
                if section == "Speed":
                    state = "speed"
                elif section == "Duplex":
                    state = "duplex"
                elif section == "Auto-negotiation":
                    state = "autoneg"
                elif section == "Link detected":
                    state = "link"
                elif section == "Supported link modes":
                    state = "supported_modes"
                elif section == "Supports auto-negotiation":
                    state = "supported_autoneg"
                elif section == "Advertised link modes":
                    state = "advertised_modes"
                elif section == "Advertised auto-negotiation":
                    state = "advertised_autoneg"
                else:
                    state = "wait"
                del section
            elif state == "speed":
                # Try to determine speed. If it fails, assume 1gbit ethernet
                try:
                    self.speed = re_speed.match(line).group(1)
                except:
                    self.speed = 1000
                state = "wait"
            elif state == "duplex":
                self.full_duplex = line == "Full"
                state = "wait"
            elif state == "autoneg":
                self.autoneg = (line == "yes" or line == "on")
                state = "wait"
            elif state == "link":
                self.link = line == "yes"
                state = "wait"
            elif state == "supported_modes":
                # Try to determine supported modes. If it fails, assume 1gbit ethernet full duplex works
                try:
                    for m in line.split():
                        (s, d) = re_mode.match(m).group(1, 2)
                        self.supported_modes.append((int(s), d == "Full"))
                    del m, s, d
                except:
                    self.supported_modes.append((1000, True))
            elif state == "supported_autoneg":
                self.supported_autoneg = line == "Yes"
                state = "wait"
            elif state == "advertised_modes":
                # Try to determine advertised modes. If it fails, assume 1gbit ethernet full duplex works
                try:
                    if line != "Not reported":
                        for m in line.split():
                            (s, d) = re_mode.match(m).group(1, 2)
                            self.advertised_modes.append((int(s), d == "Full"))
                        del m, s, d
                except:
                    self.advertised_modes.append((1000, True))
            elif state == "advertised_autoneg":
                self.advertised_autoneg = line == "Yes"
                state = "wait"

def ethcard(interface):
    if not interface in ethcard.list:
        ethcard.list[interface] = Nettool(interface)
    return ethcard.list[interface]

ethcard.list = {}

File tuned-2.19.0.29+git.b894a3e/tuned/utils/plugin_loader.py:

import tuned.logs
import os

__all__ = ["PluginLoader"]

log = tuned.logs.get()

class PluginLoader(object):
    __slots__ = ["_namespace", "_prefix", "_interface"]

    def _set_loader_parameters(self):
        """
        This method has to be implemented in a child class and should set
        the _namespace, _prefix, and _interface member attributes.
        """
        raise NotImplementedError()

    def __init__(self):
        super(PluginLoader, self).__init__()
        self._namespace = None
        self._prefix = None
        self._interface = None
        self._set_loader_parameters()
        assert type(self._namespace) is str
        assert type(self._prefix) is str
        assert type(self._interface) is type and issubclass(self._interface, object)

    def load_plugin(self, plugin_name):
        assert type(plugin_name) is str
        module_name = "%s.%s%s" % (self._namespace, self._prefix, plugin_name)
        return self._get_class(module_name)

    def _get_class(self, module_name):
        log.debug("loading module %s" % module_name)
        module = __import__(module_name)
        path = module_name.split(".")
        path.pop(0)
        while len(path) > 0:
            module = getattr(module, path.pop(0))
        for name in module.__dict__:
            cls = getattr(module, name)
            if type(cls) is type and issubclass(cls, self._interface):
                return cls
        raise ImportError("Cannot find the plugin class.")

    def load_all_plugins(self):
        plugins_package = __import__(self._namespace)
        plugin_clss = []
        for module_name in os.listdir(plugins_package.plugins.__path__[0]):
            try:
                module_name = os.path.splitext(module_name)[0]
                if not module_name.startswith("plugin_"):
                    continue
                plugin_class = self._get_class("%s.%s" % (self._namespace, module_name))
                if plugin_class not in plugin_clss:
                    plugin_clss.append(plugin_class)
            except ImportError:
                pass
        return plugin_clss

File tuned-2.19.0.29+git.b894a3e/tuned/utils/polkit.py:

import dbus
import tuned.logs

log = tuned.logs.get()

class polkit():
    def __init__(self):
        self._bus = dbus.SystemBus()
        self._proxy = self._bus.get_object('org.freedesktop.PolicyKit1',
            '/org/freedesktop/PolicyKit1/Authority',
            follow_name_owner_changes=True)
        self._authority = dbus.Interface(self._proxy,
            dbus_interface='org.freedesktop.PolicyKit1.Authority')

    def check_authorization(self, sender, action_id):
        """Check authorization, return codes:
        1 - authorized
        2 - polkit error, but authorized with fallback method
        0 - unauthorized
        -1 - polkit error and unauthorized by the fallback method
        -2 - polkit error and unable to use the fallback method
        """
        if sender is None or action_id is None:
            return False
        details = {}
        flags = 1  # AllowUserInteraction flag
        cancellation_id = ""  # No cancellation id
        subject = ("system-bus-name", {"name": sender})
        try:
            ret = self._authority.CheckAuthorization(subject, action_id, details, flags, cancellation_id)[0]
        except (dbus.exceptions.DBusException, ValueError) as e:
            log.error("error querying polkit: %s" % e)
            # No polkit or polkit error, fall back to always allowing root
            try:
                uid = self._bus.get_unix_user(sender)
            except dbus.exceptions.DBusException as e:
                log.error("error using fallback authorization method: %s" % e)
                return -2
            if uid == 0:
                return 2
            else:
                return -1
        return 1 if ret else 0

File tuned-2.19.0.29+git.b894a3e/tuned/utils/profile_recommender.py:

import os
import re
import errno
import procfs
import subprocess
from tuned.utils.config_parser import ConfigParser, Error
try:
    import syspurpose.files
    have_syspurpose = True
except:
    have_syspurpose = False
import tuned.consts as consts
import tuned.logs
from tuned.utils.commands import commands

log = tuned.logs.get()

class ProfileRecommender:
    def __init__(self, is_hardcoded=False):
        self._is_hardcoded = is_hardcoded
        self._commands = commands()
        self._chassis_type = None

    def recommend(self):
        profile = consts.DEFAULT_PROFILE
        if self._is_hardcoded:
            return profile
        has_root = os.geteuid() == 0
        if not has_root:
            log.warning("Profile recommender is running without root privileges. Profiles with virt recommendation condition will be omitted.")
        matching = self.process_config(consts.RECOMMEND_CONF_FILE, has_root=has_root)
        if matching is not None:
            return matching
        files = {}
        for directory in consts.RECOMMEND_DIRECTORIES:
            contents = []
            try:
                contents = os.listdir(directory)
            except OSError as e:
                if e.errno != errno.ENOENT:
                    log.error("error accessing %s: %s" % (directory, e))
            for name in contents:
                path = os.path.join(directory, name)
                files[name] = path
        for name in sorted(files.keys()):
            path = files[name]
            matching = self.process_config(path, has_root=has_root)
            if matching is not None:
                return matching
        return profile

    def process_config(self, fname, has_root=True):
        matching_profile = None
        syspurpose_error_logged = False
        try:
            if not os.path.isfile(fname):
                return None
            config = ConfigParser(delimiters=('='), inline_comment_prefixes=('#'))
            config.optionxform = str
            with open(fname) as f:
                config.read_file(f, fname)
            for section in config.sections():
                match = True
                for option in config.options(section):
                    value = config.get(section, option, raw=True)
                    if value == "":
                        value = r"^$"
                    if option == "virt":
                        if not has_root:
                            match = False
                            break
                        if not re.match(value, self._commands.execute(["virt-what"])[1], re.S):
                            match = False
                    elif option == "system":
                        if not re.match(value, self._commands.read_file(
                                consts.SYSTEM_RELEASE_FILE, no_error=True), re.S):
                            match = False
                    elif option[0] == "/":
                        if not os.path.exists(option) or not re.match(value, self._commands.read_file(option), re.S):
                            match = False
                    elif option[0:7] == "process":
                        ps = procfs.pidstats()
                        ps.reload_threads()
                        if len(ps.find_by_regex(re.compile(value))) == 0:
                            match = False
                    elif option == "chassis_type":
                        chassis_type = self._get_chassis_type()
                        if not re.match(value, chassis_type, re.IGNORECASE):
                            match = False
                    elif option == "syspurpose_role":
                        role = ""
                        if have_syspurpose:
                            s = syspurpose.files.SyspurposeStore(
                                syspurpose.files.USER_SYSPURPOSE, raise_on_error=True)
                            try:
                                s.read_file()
                                role = s.contents["role"]
                            except (IOError, OSError, KeyError) as e:
                                if hasattr(e, "errno") and e.errno != errno.ENOENT:
                                    log.error("Failed to load the syspurpose file: %s" % e)
                        else:
                            if not syspurpose_error_logged:
                                log.error("Failed to process 'syspurpose_role' in '%s', the syspurpose module is not available" % fname)
                                syspurpose_error_logged = True
                        if re.match(value, role, re.IGNORECASE) is None:
                            match = False
                if match:
                    # remove the ",.*" suffix
                    r = re.compile(r",[^,]*$")
                    matching_profile = r.sub("", section)
                    break
        except (IOError, OSError, Error) as e:
            log.error("error processing '%s', %s" % (fname, e))
        return matching_profile

    def _get_chassis_type(self):
        if self._chassis_type is not None:
            log.debug("returning cached chassis type '%s'" % self._chassis_type)
            return self._chassis_type
        # Check DMI sysfs first
        # Based on SMBIOS 3.3.0 specs (https://www.dmtf.org/sites/default/files/standards/documents/DSP0134_3.3.0.pdf)
        DMI_CHASSIS_TYPES = ["", "Other", "Unknown", "Desktop", "Low Profile Desktop",
            "Pizza Box", "Mini Tower", "Tower", "Portable", "Laptop", "Notebook",
            "Hand Held", "Docking Station", "All In One", "Sub Notebook",
            "Space-saving", "Lunch Box", "Main Server Chassis", "Expansion Chassis",
            "Sub Chassis", "Bus Expansion Chassis", "Peripheral Chassis",
            "RAID Chassis", "Rack Mount Chassis", "Sealed-case PC", "Multi-system",
            "CompactPCI", "AdvancedTCA", "Blade", "Blade Enclosing", "Tablet",
            "Convertible", "Detachable", "IoT Gateway", "Embedded PC", "Mini PC",
            "Stick PC"]
        try:
            with open('/sys/devices/virtual/dmi/id/chassis_type', 'r') as sysfs_chassis_type:
                chassis_type_id = int(sysfs_chassis_type.read())
            self._chassis_type = DMI_CHASSIS_TYPES[chassis_type_id]
        except IndexError:
            log.error("Unknown chassis type id read from dmi sysfs: %d" % chassis_type_id)
        except (OSError, IOError) as e:
            log.warn("error accessing dmi sysfs file: %s" % e)
        if self._chassis_type:
            log.debug("chassis type - %s" % self._chassis_type)
            return self._chassis_type
        # Fallback - try parsing dmidecode output
        try:
            p_dmi = subprocess.Popen(['dmidecode', '-s', 'chassis-type'],
                stdout=subprocess.PIPE, stderr=subprocess.PIPE, close_fds=True)
            (dmi_output, dmi_error) = p_dmi.communicate()
            if p_dmi.returncode:
                log.error("dmidecode finished with error (ret %d): '%s'" % (p_dmi.returncode, dmi_error))
            else:
                self._chassis_type = dmi_output.strip().decode()
        except (OSError, IOError) as e:
            log.warn("error executing dmidecode tool: %s" % e)
        if not self._chassis_type:
            log.debug("could not determine chassis type.")
            self._chassis_type = ""
        else:
            log.debug("chassis type - %s" % self._chassis_type)
        return self._chassis_type

File tuned-2.19.0.29+git.b894a3e/tuned/version.py:

TUNED_VERSION_MAJOR = 2
TUNED_VERSION_MINOR = 19
TUNED_VERSION_PATCH = 0

TUNED_VERSION_STR = "%d.%d.%d" % (TUNED_VERSION_MAJOR, TUNED_VERSION_MINOR, TUNED_VERSION_PATCH)