|
647 | 647 | " Hm0_list.append(resource.significant_wave_height(year_data.T))\n", |
648 | 648 | " Te_list.append(resource.energy_period(year_data.T))\n", |
649 | 649 | "\n", |
650 | | - "# Concatenate list of Series into a single DataFrame\n", |
| 650 | + "# Concatenate each list of Series into a single Series\n", |
651 | 651 | "Te = pd.concat(Te_list, axis=0)\n", |
652 | 652 | "Hm0 = pd.concat(Hm0_list, axis=0)\n", |
| 653 | + "\n", |
| 654 | + "# Name each Series and concat into a dataFrame\n", |
| 655 | + "Te.name = 'Te'\n", |
| 656 | + "Hm0.name = 'Hm0'\n", |
653 | 657 | "Hm0_Te = pd.concat([Hm0, Te], axis=1)\n", |
654 | 658 | "\n", |
655 | 659 | "# Drop any NaNs created from the calculation of Hm0 or Te\n", |
|
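The concat-and-name pattern in the diff above can be sketched in isolation. This is a minimal example with made-up wave data (the notebook builds `Hm0_list`/`Te_list` from yearly resource calculations with a datetime index; here plain integer indexes stand in, kept non-overlapping so the column-wise concat aligns cleanly):

```python
import pandas as pd

# Stand-ins for the per-year Series produced by resource.significant_wave_height
# and resource.energy_period (values and indexes are illustrative only)
Hm0_list = [pd.Series([1.2, 1.5], index=[0, 1]), pd.Series([0.9], index=[2])]
Te_list = [pd.Series([7.0, 8.1], index=[0, 1]), pd.Series([6.5], index=[2])]

# Concatenate each list of Series into a single long Series
Hm0 = pd.concat(Hm0_list, axis=0)
Te = pd.concat(Te_list, axis=0)

# Naming each Series determines its column label in the combined DataFrame
Hm0.name = 'Hm0'
Te.name = 'Te'
Hm0_Te = pd.concat([Hm0, Te], axis=1)
```

Without the `.name` assignments, `pd.concat(..., axis=1)` would fall back to default integer column labels, which is why the diff adds that naming step before building the DataFrame.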
800 | 804 | "source": [ |
801 | 805 | "## Resource Clusters\n", |
802 | 806 | "\n", |
803 | | - "Often in resource characterization we want to pick a few representative sea state to run an alaysis. To do this with the resource data in python we reccomend using a Gaussian Mixture Model (a more generalized k-means clustering method). Using sckitlearn this is very straigth forward. We combine our Hm0 and Te data into an N x 2 numpy array. We specify our number of components (number of representative sea states) and then call the fit method on the data. Fianlly, using the methods `means_` and `weights` we can organize the results into an easily digestable table." |
| 807 | + "Often in resource characterization we want to pick a few representative sea states to run an analysis. To do this with the resource data in Python we recommend using a Gaussian Mixture Model (a more generalized k-means clustering method). Using scikit-learn this is very straightforward. We combine our Hm0 and Te data into an N x 2 numpy array. We specify our number of components (number of representative sea states) and then call the fit method on the data. Finally, using the attributes `means_` and `weights_` we can organize the results into an easily digestible table." |
804 | 808 | ] |
805 | 809 | }, |
806 | 810 | { |
|
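The clustering workflow described above can be sketched as follows. This uses synthetic stand-in data rather than the notebook's `Hm0_Te` DataFrame, and the component count of 8 is an arbitrary choice for illustration:

```python
import numpy as np
import pandas as pd
from sklearn.mixture import GaussianMixture

# Synthetic stand-ins for significant wave height and energy period
rng = np.random.default_rng(0)
data = np.column_stack([
    rng.gamma(2.0, 1.0, 500),    # Hm0-like values [m]
    rng.normal(8.0, 2.0, 500),   # Te-like values [s]
])  # shape (N, 2), as the text describes

# Fit the mixture model: each component is one representative sea state
gmm = GaussianMixture(n_components=8, random_state=0).fit(data)

# Organize component means and weights into a digestible table
results = pd.DataFrame(gmm.means_, columns=['Hm0', 'Te'])
results['weight'] = gmm.weights_
```

Each row of `results` is a representative (Hm0, Te) pair, and `weight` is the fraction of the data that component accounts for, so the weights sum to one.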
933 | 937 | " Hm0_list.append(resource.significant_wave_height(year_data.T))\n", |
934 | 938 | " Tp_list.append(resource.peak_period(year_data.T))\n", |
935 | 939 | "\n", |
936 | | - "# Concatenate list of Series into a single DataFrame\n", |
| 940 | + "# Concatenate each list of Series into a single Series\n", |
937 | 941 | "Tp = pd.concat(Tp_list, axis=0)\n", |
938 | 942 | "Hm0 = pd.concat(Hm0_list, axis=0)\n", |
| 943 | + "\n", |
| 944 | + "# Name each Series and concat into a dataFrame\n", |
| 945 | + "Tp.name = 'Tp'\n", |
| 946 | + "Hm0.name = 'Hm0'\n", |
939 | 947 | "Hm0_Tp = pd.concat([Hm0, Tp], axis=1)\n", |
940 | 948 | "\n", |
941 | 949 | "# Drop any NaNs created from the calculation of Hm0 or Te\n", |
|
1116 | 1124 | ], |
1117 | 1125 | "metadata": { |
1118 | 1126 | "kernelspec": { |
1119 | | - "display_name": "Python 3.9.13 ('.venv': venv)", |
| 1127 | + "display_name": "Python 3 (ipykernel)", |
1120 | 1128 | "language": "python", |
1121 | 1129 | "name": "python3" |
1122 | 1130 | }, |
|
1130 | 1138 | "name": "python", |
1131 | 1139 | "nbconvert_exporter": "python", |
1132 | 1140 | "pygments_lexer": "ipython3", |
1133 | | - "version": "3.11.7" |
| 1141 | + "version": "3.12.4" |
1134 | 1142 | }, |
1135 | 1143 | "vscode": { |
1136 | 1144 | "interpreter": { |
|