fix: correct device and dtype mixup #1433
Open
xiuyuan18 wants to merge 2 commits into
Conversation
fix the assignment logic in AutoTorchModule
modified: diffsynth/core/vram/layers.py
Contributor
Code Review
This pull request corrects a bug in the set_dtype_and_device method where offload_device, onload_device, and preparing_device incorrectly defaulted to computation_dtype instead of computation_device. The review feedback suggests refining this logic further: if any of the corresponding dtypes is set to "disk", the matching device should also default to "disk" so the disk offloading mechanism keeps working.
When offload_dtype is set to "disk", the offload_device should also default to "disk" to ensure that the disk offloading logic in AutoWrappedModule (which checks self.offload_device == "disk") works correctly. The current fallback to computation_device would cause the onload and preparing methods to skip loading from disk, as they rely on this sentinel value to trigger load_from_disk.
Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>
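For reference, a minimal sketch of the corrected fallback logic, folding in the review's suggestion. The parameter and field names are taken from the discussion above; the surrounding class code in diffsynth/core/vram/layers.py is assumed, not quoted, and may differ in shape.

```python
# Minimal sketch, assuming the names used in this PR's discussion.
class AutoTorchModule:
    def set_dtype_and_device(
        self,
        computation_dtype,
        computation_device,
        offload_dtype=None,
        offload_device=None,
        onload_dtype=None,
        onload_device=None,
        preparing_dtype=None,
        preparing_device=None,
    ):
        # dtype fields fall back to the computation dtype.
        self.offload_dtype = offload_dtype if offload_dtype is not None else computation_dtype
        self.onload_dtype = onload_dtype if onload_dtype is not None else computation_dtype
        self.preparing_dtype = preparing_dtype if preparing_dtype is not None else computation_dtype

        def _device_fallback(device, dtype):
            # Device fields fall back to the computation device (the bug was
            # falling back to computation_dtype), except when the matching
            # dtype is the "disk" sentinel: the device must then also be
            # "disk" so that AutoWrappedModule's check
            # `self.offload_device == "disk"` still triggers load_from_disk.
            if device is not None:
                return device
            return "disk" if dtype == "disk" else computation_device

        self.offload_device = _device_fallback(offload_device, self.offload_dtype)
        self.onload_device = _device_fallback(onload_device, self.onload_dtype)
        self.preparing_device = _device_fallback(preparing_device, self.preparing_dtype)
```

Centralizing the device fallback in one helper keeps the "disk" sentinel handling consistent across all three device fields.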
fix the vram config fallback logic in class AutoTorchModule, method set_dtype_and_device. The previous method would assign computation_dtype to onload_device, offload_device, and preparing_device if they were None, which mixed up dtype and device.
modified: diffsynth/core/vram/layers.py
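A hypothetical usage of the sketch above, illustrating both the plain device fallback and the "disk" sentinel case (stand-in string values; the real code passes torch dtypes and devices):

```python
m = AutoTorchModule()
m.set_dtype_and_device(
    computation_dtype="float16",
    computation_device="cuda",
    offload_dtype="disk",  # no explicit offload_device given
)
assert m.offload_device == "disk"   # sentinel preserved -> load_from_disk reachable
assert m.onload_device == "cuda"    # plain fallback to computation_device
assert m.onload_dtype == "float16"  # dtype fallback unchanged
```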