Add support for PowerPC VLE instruction set #6740
Open
cryptwhoa wants to merge 19 commits into Vector35:homegrown_powerpc from cryptwhoa:powerpc_vle
Conversation
This commit just brings it in and makes sure it builds, but doesn't change anything in the arch plugin to use it yet.
Decoding instructions doesn't need these, but they're useful for the architecture plugin.
We diverge from capstone here: instead of treating every branch pseudo-op as its own distinct instruction ID, we group all of the BCx, BCLRx, and BCCTRx instructions together and change the mnemonic depending on the values of BI and BO. This drastically simplifies the arch plugin for things like getting instruction info and lifting.
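As a rough illustration (not the PR's actual code), deriving a simplified mnemonic from the BO/BI fields of a grouped BCx instruction might look like the sketch below. The mnemonic table covers only the common condition-register cases; real BO encodings also include the CTR-decrementing forms, which are omitted here.

```python
def bc_mnemonic(bo: int, bi: int) -> str:
    """Map the BO/BI fields of a BCx instruction to a simplified
    mnemonic (illustrative subset only; hypothetical helper)."""
    # BO pattern 1z1zz: branch unconditionally
    if bo & 0b10100 == 0b10100:
        return "b"
    # Bit position within the selected CR field: 0=LT, 1=GT, 2=EQ, 3=SO
    cond_true = ["blt", "bgt", "beq", "bso"]
    cond_false = ["bge", "ble", "bne", "bns"]
    if bo & 0b11110 == 0b01100:   # branch if CR bit BI is set
        return cond_true[bi % 4]
    if bo & 0b11110 == 0b00100:   # branch if CR bit BI is clear
        return cond_false[bi % 4]
    return "bc"                   # fall back to the generic form
```

With one instruction ID plus a mnemonic function like this, instruction-info and lifting code can dispatch on the group rather than on dozens of pseudo-op IDs.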
Note that this hasn't been tested against capstone 5.0.3 yet: I said "ooh it looks like binaryninja supports SPE now", added it, then when I went to test it, it looks like capstone 5.0.3 doesn't support it yet (`CS_MODE_SPE` isn't yet in the allowed bitmask for powerpc in the `arch_configs` table in `cs.c`). There's enough work that I'm leaving it in here for now.
This was getting unwieldy.
Labels
Arch: PowerPC
Issues with the PowerPC architecture plugin
Component: Architecture
Issue needs changes to an architecture plugin
Type: Enhancement
Issue is a small enhancement to existing functionality
This adds support for PowerPC VLE to the homegrown PPC (i.e., non-capstone) branch.
Note that some instructions in the sample ELFs (see below) appear undecoded; this is due to issue #6290: we don't yet support figuring out which type of vector instructions a binary uses, so the ELF recognizer doesn't know to decode them as SPE instructions. I expect to add support for this in a future PR; this PR covers the VLE instruction set by itself. Note that at the time I started writing this (maybe it's been fixed since then? probably not, if #6290 is still open), this was a limitation of PPC binaries in general.
There was some grossness in getting the auto-VLE detection to work in ELFs. The hardware mechanism for distinguishing VLE instructions is page bits in the MMU; ELFs denote VLE sections via a flag value on the section. Discussions on Slack indicated that the best way forward was to pass data from the ELF view to a platform recognizer via Metadata. Unfortunately, this seemed to require duplicating the section parsing logic.

Note that binaryninja's ELF loader doesn't seem to have a clean way to set the architectures of different sections. In theory, an ELF could contain both VLE and non-VLE code sections, though I don't know how often these appear in practice. The auto-detection logic only sets the architecture to VLE if all of the executable sections have the VLE flag.
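A minimal sketch of that all-or-nothing check, assuming the standard `SHF_EXECINSTR` flag and a VLE section flag value taken from the PowerPC VLE ABI (the flag name and this standalone helper are illustrative, not the PR's code):

```python
SHF_EXECINSTR = 0x4        # standard ELF: section contains code
SHF_PPC_VLE = 0x10000000   # assumed VLE flag value from the VLE ABI

def should_use_vle(section_flags: list[int]) -> bool:
    """Return True only if every executable section carries the VLE flag.

    Mirrors the conservative auto-detection described above: a mixed
    VLE/non-VLE binary falls back to the non-VLE architecture.
    """
    exec_sections = [f for f in section_flags if f & SHF_EXECINSTR]
    return bool(exec_sections) and all(f & SHF_PPC_VLE for f in exec_sections)
```

In the real plugin this decision would be made from the section flags passed through Metadata to the platform recognizer, rather than from a bare list of integers.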
Currently, the PR focuses on big-endian 32-bit VLE. Other variants likely exist: VLE instruction decoding is always big-endian, but the docs don't say anything about data endianness, so a little-endian 32-bit VLE is probably possible, as is 64-bit VLE in either endianness. Once variants (SPE, Altivec, etc.) are added, these architectures will start to balloon across the whole matrix of endianness, word size, and extensions, so I'm punting on that complexity for #6290.
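To illustrate the combinatorial growth, enumerating the matrix (the names below are hypothetical, not Binary Ninja architecture identifiers):

```python
from itertools import product

endianness = ["be", "le"]
widths = [32, 64]
variants = ["base", "spe", "altivec"]  # extensions mentioned above

# Each combination would, in principle, be its own architecture.
archs = [f"ppc{w}_vle_{e}_{v}" for w, e, v in product(widths, endianness, variants)]
print(len(archs))  # 12 combinations already
```

Even with only two extensions, supporting the full matrix means a dozen architecture registrations, which is why this PR sticks to big-endian 32-bit.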
Datasheets:
VLEPEM.pdf (the main document for VLE)
VLE_addendum.pdf (documentation for a few more instructions)
Sample ELFs can be generated from a container here: https://github.com/AutomotiveDevOps/nxp-devkit-mpc57xx-docker. For convenience, here's a zipped up package of all the ELFs:
ppcvle_elfs.zip