
Conversation

@lina128 lina128 (Collaborator) commented Oct 30, 2020

To see the logs from the Cloud Build CI, please join either our discussion or announcement mailing list.



@tafsiri tafsiri (Contributor) left a comment


LGTM, but I think a few things could be improved. Feel free to ping me if you want to discuss types over GVC.

Reviewed 3 of 3 files at r1.
Reviewable status: :shipit: complete! 1 of 1 approvals obtained (waiting on @annxingyuan, @jinjingforever, @lina128, and @tafsiri)


tfjs-backend-cpu/src/backend_cpu.ts, line 154 at r1 (raw file):

      }
    }
    return new TensorBuffer(t.shape, t.dtype, decodedData);

Just curious, why this change? (from factory method to constructor) (also see comment below about thoughts on the return type).
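For illustration, a minimal sketch of the two calling styles being compared, assuming current tfjs-core exports (not code from this PR; exact signatures may differ):

```ts
import {buffer, TensorBuffer} from '@tensorflow/tfjs-core';

const shape: [number, number] = [2, 3];
const values = new Float32Array([1, 2, 3, 4, 5, 6]);

// Factory: tf.buffer() runs its own argument checks before building the buffer.
const viaFactory = buffer(shape, 'float32', values);

// Direct constructor, as the modularized kernels in this change call it.
const viaCtor = new TensorBuffer(shape, 'float32', values);
```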


tfjs-backend-cpu/src/backend_cpu.ts, line 890 at r1 (raw file):

                    const pixel =
                        dyBuf.get(batch, dyDepth, dyRow, dyCol, channel) as

Do you know why this needs to be cast now? Is it because of the new method signature for bufferSync? If so, you may be able to resolve that by calling the constructor with something like new TensorBuffer(...), where the rank is calculated from the TensorInfo; that might let the default float32 dtype flow through. This may not quite be the solution, but I think it would be nice to keep the return type of bufferSync close to what it was before.
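For illustration, a minimal sketch of the kind of typed helper being suggested here (hypothetical names, not the code in this PR): keeping the rank and dtype in bufferSync's return type lets get() return a number without an `as number` cast at the call site. `readValues` is a stand-in for however the backend looks up the tensor's backing data.

```ts
import {DataType, DataTypeMap, Rank, ShapeMap, TensorBuffer, TensorInfo} from '@tensorflow/tfjs-core';

function bufferSync<R extends Rank, D extends DataType = 'float32'>(
    t: TensorInfo,
    readValues: (t: TensorInfo) => DataTypeMap[D]): TensorBuffer<R, D> {
  // The TensorInfo carries shape/dtype as plain number[]/DataType, so narrow
  // them to the generic parameters when constructing the buffer.
  return new TensorBuffer<R, D>(
      t.shape as ShapeMap[R], t.dtype as D, readValues(t));
}

// With the dtype preserved in the type, no cast is needed at the call site:
// const dyBuf = bufferSync<Rank.R5, 'float32'>(dy, readValues);
// const pixel = dyBuf.get(batch, dyDepth, dyRow, dyCol, channel);  // number
```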


tfjs-backend-cpu/src/kernels/Reverse.ts, line 41 at r1 (raw file):

  }

  const buffer = new TensorBuffer(x.shape, x.dtype);

Small renaming suggestion: buffer => outBuf (matching the style of xBuf and making its purpose clearer).

@lina128 lina128 (Collaborator, Author) left a comment


Hi Yannick, thank you for discussing TensorBuffer with me and how the generic types (i.e. Rank and DataType) are used in our codebase. The conclusion was that we want to keep these two generic types because they enable nice type checking in some use cases, mostly for tensor1d, tensor2d, tensor3d, etc. For the TensorInfos, we define shape as number[] as opposed to ShapeMap[R], because internal rank checking doesn't have many use cases for now.
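As an illustrative example of that trade-off (assumed usage, not code from this PR): the rank-specific creators get compile-time shape checking through ShapeMap[R], while a TensorInfo keeps a plain number[] shape.

```ts
import * as tf from '@tensorflow/tfjs-core';

// tensor2d's shape parameter is typed [number, number], so a shape of the
// wrong length is rejected at compile time.
const a = tf.tensor2d([1, 2, 3, 4], [2, 2]);          // OK
// const b = tf.tensor2d([1, 2, 3, 4], [2, 2, 1]);    // compile-time error

// A TensorInfo, as used by the modularized kernels, only needs shape: number[].
const info: tf.TensorInfo = {dataId: {}, shape: [2, 2], dtype: 'float32'};
```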

Reviewable status: :shipit: complete! 1 of 1 approvals obtained (waiting on @annxingyuan, @jinjingforever, and @tafsiri)


tfjs-backend-cpu/src/backend_cpu.ts, line 154 at r1 (raw file):

Previously, tafsiri (Yannick Assogba) wrote…

Just curious, why this change? (from factory method to constructor) (also see comment below about thoughts on the return type).

Just for consistency with how it is called elsewhere. Also, tf.buffer's type check and assertion may be unnecessary in the kernel world. But I don't have a strong preference, WDYT?


tfjs-backend-cpu/src/backend_cpu.ts, line 890 at r1 (raw file):

Previously, tafsiri (Yannick Assogba) wrote…

Do you know why this needs to be cast now? Is it because of the new method signature for bufferSync? If so, you may be able to resolve that by calling the constructor with something like new TensorBuffer(...), where the rank is calculated from the TensorInfo; that might let the default float32 dtype flow through. This may not quite be the solution, but I think it would be nice to keep the return type of bufferSync close to what it was before.

Yeah, it was because I removed the type cast; I've added it back. Thank you for your help!


tfjs-backend-cpu/src/kernels/Reverse.ts, line 41 at r1 (raw file):

Previously, tafsiri (Yannick Assogba) wrote…

Small renaming suggestion: buffer => outBuf (matching the style of xBuf and making its purpose clearer).

Done.

@lina128 lina128 merged commit 4c02beb into tensorflow:master Nov 2, 2020
@lina128 lina128 deleted the modularization branch November 2, 2020 19:40
