Issues: pytorch/executorch
How to modify max seq len in android llama demo app?
Labels: module: examples (issues related to demos under the examples directory)
#3674 opened May 20, 2024 by CHNtentes
Issues lowering attention module to edge
Labels: module: exir (issues related to Export IR)
#3672 opened May 19, 2024 by ismaeelbashir03
Not able to set up ExecuTorch (Error: Could not find a version that satisfies the requirement torch==2.3.0)
Labels: triaged (this issue has been looked at by a team member and triaged into an appropriate module)
#3670 opened May 18, 2024 by CannoChen
Errors when lowering to edge
Labels: module: exir, triaged
#3659 opened May 17, 2024 by ismaeelbashir03
llama2 '8da4w-gptq' quantization fails
Labels: module: quantization
#3632 opened May 16, 2024 by Liming-Wang
Llama example build failure on macOS
Labels: bug (something isn't working), high priority, module: build (related to buck2 and cmake builds), triage review (items requiring a triage review), triaged
#3600 opened May 14, 2024 by GregoryComer
Llama2-7B mobile app crashes on Samsung S23 with 8 GB RAM
Labels: Android (Android building and execution related), module: extension (related to extensions built on top of the runtime, e.g. pybindings and data loaders), triaged
#3599 opened May 14, 2024 by salykova
Does the llama2 example on Android utilize HTP?
Labels: triaged
#3586 opened May 11, 2024 by CHNtentes
How can I use ExecuTorch to deploy a model to a microcontroller, such as Infineon TC3xxx?
Labels: new-backend (request to add a new backend)
#3585 opened May 11, 2024 by AlexLuya
Ensure python version is compatible before building wheels
Labels: module: build, module: doc (related to our documentation, both in docs/ and docblocks), triaged
#3570 opened May 10, 2024 by Ciao-Wen-Chen
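The kind of guard this issue asks for (and whose absence produces the confusing "Could not find a version that satisfies the requirement torch==2.3.0" error reported in #3670) can be sketched in plain Python. The version bounds below are illustrative assumptions, not ExecuTorch's actual supported range:

```python
import sys

# Illustrative bounds; the real supported range would come from the project's setup metadata.
MIN_PY = (3, 10)
MAX_PY_EXCLUSIVE = (3, 13)

def check_python_version(version_info=sys.version_info):
    """Fail fast with a clear message instead of an opaque pip resolution error."""
    current = (version_info[0], version_info[1])
    if not (MIN_PY <= current < MAX_PY_EXCLUSIVE):
        raise RuntimeError(
            f"Python {current[0]}.{current[1]} is unsupported; "
            f"need >= {MIN_PY[0]}.{MIN_PY[1]} and < "
            f"{MAX_PY_EXCLUSIVE[0]}.{MAX_PY_EXCLUSIVE[1]}"
        )
```

Running such a check at the top of the build script surfaces the incompatibility directly, rather than letting pip report that no matching wheel exists.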
Evaluation results of llama2 with ExecuTorch
Labels: llm: evaluation (perplexity, accuracy), triaged
#3568 opened May 10, 2024 by l2002924700
Operator torch._ops.aten.linalg_vector_norm.default is not ATen canonical
Labels: bug, module: exir, module: kernels (issues related to kernel libraries, e.g. portable and optimized kernels), triaged
#3566 opened May 9, 2024 by nbansal90
What's the meaning of "Groupwise 4-bit (128)"?
Labels: module: quantization, module: xnnpack (issues related to XNNPACK delegation), rfc (request for comment and feedback on a post, proposal, etc.)
#3559 opened May 9, 2024 by l2002924700
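"Groupwise 4-bit (128)" refers to a standard quantization scheme: weights are split into groups of 128 values, and each group gets its own scale so the signed 4-bit integer range [-8, 7] covers that group's dynamic range. A minimal plain-Python sketch of the idea (function names are illustrative, not ExecuTorch APIs):

```python
GROUP_SIZE = 128  # the "(128)" in the label is the group size

def quantize_groupwise_4bit(weights):
    """Symmetric per-group 4-bit quantization: returns (int values, per-group scales)."""
    qvals, scales = [], []
    for start in range(0, len(weights), GROUP_SIZE):
        group = weights[start:start + GROUP_SIZE]
        # Map the group's max magnitude to 7; fall back to 1.0 for an all-zero group.
        scale = max(abs(w) for w in group) / 7.0 or 1.0
        scales.append(scale)
        # Clamp to the signed 4-bit range [-8, 7].
        qvals.extend(max(-8, min(7, round(w / scale))) for w in group)
    return qvals, scales

def dequantize_groupwise_4bit(qvals, scales):
    """Reconstruct approximate weights from 4-bit values and per-group scales."""
    return [q * scales[i // GROUP_SIZE] for i, q in enumerate(qvals)]
```

Smaller groups track local dynamic range more tightly (better accuracy) at the cost of storing more scales; 128 is a common middle ground.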
Exporting Llama3's tokenizer
Labels: bug, module: doc, triaged
#3555 opened May 8, 2024 by vifi2021
Support Phi 3 model
Labels: high priority, triaged
#3550 opened May 8, 2024 by iseeyuan
ERROR: Overriding output data pointer allocated by memory plan is not allowed.
Labels: bug, partner: qualcomm (for backend delegation, kernels, demos, etc. from the third-party partner Qualcomm), triaged
#3528 opened May 7, 2024 by sunqijie0350
Converting llama3 models with added tokens
Labels: enhancement (not as big as a feature, but technically not a bug; should be easy to fix)
#3519 opened May 6, 2024 by l3utterfly
KV cache manipulation?
Labels: enhancement, feature (a request for a proper, new feature)
#3518 opened May 6, 2024 by l3utterfly
torch.max(input) fails at XNNPACK runtime
Labels: bug
#3516 opened May 6, 2024 by kinghchan