Add more arguments to `x perf` (#126853)
Changes from all commits: 5cf01b8, d827acb, d5a01b5, fa340b4, a29b18a
@@ -1,12 +1,115 @@
+use std::fmt::{Display, Formatter};
 use std::process::Command;

 use crate::core::build_steps::compile::{Std, Sysroot};
 use crate::core::build_steps::tool::RustcPerf;
 use crate::core::builder::Builder;
 use crate::core::config::DebuginfoLevel;

+/// Performs profiling or benchmarking with [`rustc-perf`](https://github.com/rust-lang/rustc-perf)
+/// using a locally built compiler.
+#[derive(Debug, Clone, clap::Parser)]
+pub struct PerfArgs {
+    #[clap(subcommand)]
+    cmd: PerfCommand,
+
+    #[clap(flatten)]
+    opts: SharedOpts,
+}
+
+impl Default for PerfArgs {
+    fn default() -> Self {
+        Self { cmd: PerfCommand::Eprintln, opts: SharedOpts::default() }
+    }
+}
+
+#[derive(Debug, Clone, clap::Parser)]
+enum PerfCommand {
+    /// Run `profile_local eprintln`.
+    /// This executes the compiler on the given benchmarks and stores its stderr output.
+    Eprintln,
+    /// Run `profile_local samply`
+    /// This executes the compiler on the given benchmarks and profiles it with `samply`.
+    /// You need to install `samply`, e.g. using `cargo install --locked samply`.
+    Samply,
[Review thread on lines +31 to +34]

Comment: I just ran […]

Reply: Right, so this is a bit tricky. In general, many of the profilers require some custom configuration of the system, and also some postprocessing. This is documented in rustc-perf, and I'm not sure it is a good idea to duplicate that documentation here. The tool should let you know exactly what you should do w.r.t. […] Furthermore, even if you run the profiler, it just generates a file on disk, and unless you know what to do with it, it won't be very useful to you. So maybe we could create specific postprocessing steps in bootstrap, for example to open a web browser with the gathered profile after running samply. But even that is not so easy, because what if you run more than one benchmark? :) Doing profiling and benchmarking does require some specific knowledge at the moment. We could make it easier for the user, but I'm not sure whether that code belongs here or in rustc-perf, and how far we should go with this.
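The postprocessing idea floated above could look roughly like the following sketch. This is an assumption, not code from this PR; the helper name `open_samply_profile` is made up, though `samply load` is a real samply subcommand for viewing a saved profile.

use std::path::Path;
use std::process::Command;

// Hypothetical bootstrap postprocessing step: after `profile_local samply`
// writes a profile to disk, hand the file back to samply, which starts a
// local web server and opens the profile in the browser.
fn open_samply_profile(profile: &Path) -> std::io::Result<()> {
    let status = Command::new("samply").arg("load").arg(profile).status()?;
    if !status.success() {
        eprintln!("samply exited with {status}");
    }
    Ok(())
}

As the reply notes, this only works cleanly for a single benchmark; with several, bootstrap would have to decide which profile to open.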
+
+    /// Run `profile_local cachegrind`.
+    /// This executes the compiler on the given benchmarks under `Cachegrind`.
+    Cachegrind,
+}
+
+impl PerfCommand {
+    fn is_profiling(&self) -> bool {
+        match self {
+            PerfCommand::Eprintln | PerfCommand::Samply | PerfCommand::Cachegrind => true,
+        }
+    }
+}
+
+#[derive(Debug, Default, Clone, clap::Parser)]
+struct SharedOpts {
+    /// Select the benchmarks that you want to run (separated by commas).
+    /// If unspecified, all benchmarks will be executed.
+    #[clap(long, global = true, value_delimiter = ',')]
+    include: Vec<String>,
+    /// Select the scenarios that should be benchmarked.
+    #[clap(
+        long,
+        global = true,
+        value_delimiter = ',',
+        default_value = "Full,IncrFull,IncrUnchanged,IncrPatched"
+    )]
+    scenarios: Vec<Scenario>,
+    /// Select the profiles that should be benchmarked.
+    #[clap(long, global = true, value_delimiter = ',', default_value = "Check,Debug,Opt")]
+    profiles: Vec<Profile>,
+}
+
+#[derive(Clone, Copy, Debug, clap::ValueEnum)]
+#[value(rename_all = "PascalCase")]
+enum Profile {
+    Check,
+    Debug,
+    Doc,
+    Opt,
+    Clippy,
+}
+
+impl Display for Profile {
+    fn fmt(&self, f: &mut Formatter<'_>) -> std::fmt::Result {
+        let name = match self {
+            Profile::Check => "Check",
+            Profile::Debug => "Debug",
+            Profile::Doc => "Doc",
+            Profile::Opt => "Opt",
+            Profile::Clippy => "Clippy",
+        };
+        f.write_str(name)
+    }
+}
+
+#[derive(Clone, Copy, Debug, clap::ValueEnum)]
+#[value(rename_all = "PascalCase")]
+enum Scenario {
+    Full,
+    IncrFull,
+    IncrUnchanged,
+    IncrPatched,
+}
+
+impl Display for Scenario {
+    fn fmt(&self, f: &mut Formatter<'_>) -> std::fmt::Result {
+        let name = match self {
+            Scenario::Full => "Full",
+            Scenario::IncrFull => "IncrFull",
+            Scenario::IncrUnchanged => "IncrUnchanged",
+            Scenario::IncrPatched => "IncrPatched",
+        };
+        f.write_str(name)
+    }
+}
+
 /// Performs profiling using `rustc-perf` on a built version of the compiler.
-pub fn perf(builder: &Builder<'_>) {
+pub fn perf(builder: &Builder<'_>, args: &PerfArgs) {
     let collector = builder.ensure(RustcPerf {
         compiler: builder.compiler(0, builder.config.build),
         target: builder.config.build,

@@ -22,24 +125,46 @@ Consider setting `rust.debuginfo-level = 1` in `config.toml`."#);
     let sysroot = builder.ensure(Sysroot::new(compiler));
     let rustc = sysroot.join("bin/rustc");

-    let results_dir = builder.build.tempdir().join("rustc-perf");
+    let rustc_perf_dir = builder.build.tempdir().join("rustc-perf");
+    let profile_results_dir = rustc_perf_dir.join("results");

     let mut cmd = Command::new(collector);
-    let cmd = cmd
-        .arg("profile_local")
-        .arg("eprintln")
-        .arg("--out-dir")
-        .arg(&results_dir)
-        .arg("--include")
-        .arg("helloworld")
-        .arg(&rustc);
+    match &args.cmd {
+        PerfCommand::Eprintln => {
+            cmd.arg("profile_local").arg("eprintln");
+        }
+        PerfCommand::Samply => {
+            cmd.arg("profile_local").arg("samply");
+        }
+        PerfCommand::Cachegrind => {
+            cmd.arg("profile_local").arg("cachegrind");
+        }
+    }
+    if args.cmd.is_profiling() {
+        cmd.arg("--out-dir").arg(&profile_results_dir);
+    }
+
+    if !args.opts.include.is_empty() {
+        cmd.arg("--include").arg(args.opts.include.join(","));
+    }
+    if !args.opts.profiles.is_empty() {
+        cmd.arg("--profiles")
+            .arg(args.opts.profiles.iter().map(|p| p.to_string()).collect::<Vec<_>>().join(","));
+    }
+    if !args.opts.scenarios.is_empty() {
+        cmd.arg("--scenarios")
+            .arg(args.opts.scenarios.iter().map(|p| p.to_string()).collect::<Vec<_>>().join(","));
+    }
+    cmd.arg(&rustc);

     builder.info(&format!("Running `rustc-perf` using `{}`", rustc.display()));

     // We need to set the working directory to `src/tools/perf`, so that it can find the directory
     // with compile-time benchmarks.
     let cmd = cmd.current_dir(builder.src.join("src/tools/rustc-perf"));
-    builder.build.run(cmd);
+    builder.run(cmd);

-    builder.info(&format!("You can find the results at `{}`", results_dir.display()));
+    if args.cmd.is_profiling() {
+        builder.info(&format!("You can find the results at `{}`", profile_results_dir.display()));
+    }
 }
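To make the clap wiring above concrete, here is a minimal, self-contained sketch of the same pattern (the names `Args`, `Cmd`, and `Shared` are placeholders, not from this PR): a required subcommand enum combined with flattened options that are marked `global = true` and split on commas via `value_delimiter`.

use clap::Parser;

#[derive(Debug, Parser)]
struct Args {
    #[clap(subcommand)]
    cmd: Cmd,
    #[clap(flatten)]
    opts: Shared,
}

#[derive(Debug, Clone, Parser)]
enum Cmd {
    Eprintln,
    Samply,
    Cachegrind,
}

#[derive(Debug, Default, Parser)]
struct Shared {
    /// `global = true` lets the flag appear after the subcommand;
    /// `value_delimiter` splits one comma-separated value into a `Vec`.
    #[clap(long, global = true, value_delimiter = ',')]
    include: Vec<String>,
}

fn main() {
    // Mirrors e.g. `x perf eprintln --include helloworld,ripgrep`.
    let args = Args::parse_from(["perf", "eprintln", "--include", "helloworld,ripgrep"]);
    assert_eq!(args.opts.include, ["helloworld", "ripgrep"]);
}

Because the shared options are global, `--include` is accepted both before and after the subcommand, even though it is declared on the top-level struct rather than on each variant.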
[Review thread on the `Default for PerfArgs` implementation]

Comment: Shouldn't we run all the benchmarks by default with something like `PerfCommand::All` when the user doesn't specify a particular benchmark module (e.g., plain `x perf`)?

Reply: Just to clarify, these are individual profilers (`eprintln`, `cachegrind`, etc.), not benchmarks. I don't think it makes sense to run all of them at once; it's like running `cargo check/build/test` at once, they just do very different things and serve different purposes. We could select some default, but I'm not sure there even is a default profiler. Once we actually add benchmarking in the future, I would propose that `x perf` just run benchmarks, but since we only have profilers now, I would opt for requiring users to select the profiler that they want to use.

Comment: IMO it makes more sense to run `samply` (perhaps even together with `cachegrind`) rather than `eprintln` by default. Currently the `eprintln` default doesn't really provide a good overall report about the compiler.

Reply: So, just to clarify, I would prefer to keep the current situation, where you cannot just run `x perf` but have to select some profiler, because currently I don't think there is a good default: for both `samply` and `cachegrind` you need an external tool installed, so neither is a good default IMO. This `Default` implementation should be basically `unreachable!`.
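As a rough illustration of that last remark, the `Default` impl could fail loudly instead of silently picking `Eprintln`. This is a hypothetical rewrite, not code from this PR, reusing the `PerfArgs` type from the diff above:

// Hypothetical: Default only exists to satisfy a trait bound and is never
// expected to run once `x perf` requires an explicit profiler subcommand.
impl Default for PerfArgs {
    fn default() -> Self {
        unreachable!("`x perf` requires an explicit profiler subcommand");
    }
}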