Fix: StringLookup returns torch native types for torch backend #21614
base: master
Conversation
Summary of Changes

Hello @Ma-gi-cian, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request addresses an issue in the Keras `StringLookup` layer to ensure consistent output types when using the PyTorch backend. The changes refactor the layer's `call` method to return native PyTorch tensors for forward lookups (string to integer) and standard Python lists for inverse lookups (integer to string). This aligns the layer's behavior with PyTorch's conventions and resolves a reported bug, improving compatibility and predictability. A new test case has also been added to validate these changes.
Highlights
- Consistent PyTorch Output: The `StringLookup` layer's `call` method has been refactored to ensure that when the `torch` backend is active, forward lookups (string-to-integer) consistently return `torch.Tensor` objects.
- Python List for Inverse Lookup: For inverse lookups (integer-to-string), the `StringLookup` layer now returns a standard Python `list` of strings, mirroring the behavior of `torchtext.vocab.lookup_tokens`.
- Improved Input Handling: The `call` method now explicitly handles `torch.Tensor` inputs, converting them to NumPy arrays and then to TensorFlow tensors for internal processing, before converting the final output back to the appropriate PyTorch-native type or Python list.
- New Compatibility Test: A new test case, `test_torch_backend_compatibility`, has been added to validate the corrected behavior of `StringLookup` with the PyTorch backend and prevent future regressions.
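The lookup contract described in the highlights can be sketched in plain Python. This is a torch-free stand-in, not the Keras implementation: under the `torch` backend the real layer would wrap the forward result in a `torch.Tensor`, and the function names and the OOV-at-index-0 convention here are illustrative assumptions.

```python
# Minimal sketch of the forward/inverse lookup contract (illustrative,
# not the actual Keras StringLookup code).

def forward_lookup(tokens, vocabulary):
    """String -> int. Index 0 is reserved for out-of-vocabulary tokens.
    Under the torch backend the real layer would return these ids as a
    torch.Tensor rather than a plain list."""
    index = {tok: i + 1 for i, tok in enumerate(vocabulary)}
    return [index.get(tok, 0) for tok in tokens]

def inverse_lookup(ids, vocabulary, oov_token="[UNK]"):
    """Int -> string, returned as a plain Python list of strings,
    mirroring torchtext.vocab.lookup_tokens."""
    return [
        vocabulary[i - 1] if 1 <= i <= len(vocabulary) else oov_token
        for i in ids
    ]

vocab = ["a", "b", "c"]
ids = forward_lookup(["a", "c", "z"], vocab)   # -> [1, 3, 0]
tokens = inverse_lookup(ids, vocab)            # -> ["a", "c", "[UNK]"]
```

The asymmetry is deliberate: integer ids are natural tensor payloads, while strings are not a native tensor dtype in PyTorch, hence the Python list on the inverse path.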
Code Review

This pull request correctly adjusts the `StringLookup` layer to return PyTorch-native types when using the `torch` backend, improving consistency. The changes ensure forward lookups return `torch.Tensor` and inverse lookups return a Python `list`, aligning with `torchtext` behavior. The addition of `test_torch_backend_compatibility` is great for ensuring this behavior is maintained.

My main feedback is to refactor the `call` method in `string_lookup.py` for better readability and to address a minor issue with an unreachable comment. The proposed refactoring simplifies the conditional logic without changing the functionality.
Codecov Report

❌ Patch coverage is
Additional details and impacted files

```
@@ Coverage Diff @@
##           master   #21614   +/-   ##
=======================================
  Coverage   82.45%   82.45%
=======================================
  Files         572      572
  Lines       57337    57348    +11
  Branches     8970     8974     +4
=======================================
+ Hits        47277    47288    +11
  Misses       7761     7761
  Partials     2299     2299
```
Closes #21255

Implementation:
- Refactored the `call` method in `StringLookup` to provide consistent, PyTorch-native outputs when using the `torch` backend.
- Ensures that forward lookups (string-to-int) now always return a `torch.Tensor`.
- Ensures that inverse lookups (int-to-string) now always return a Python `list`, aligning behavior with `torchtext.vocab.lookup_tokens`.
- Added `test_torch_backend_compatibility` to validate the fix and prevent future regressions.
Here is a test file alongside its output to confirm the fix works:

Output: