Monkey Patching Python Code

By Rabiesaadawi · May 28, 2022 · in Artificial Intelligence


Python is a dynamic scripting language. Not only does it have a dynamic type system, where a variable can be assigned to one type first and changed later, but its object model is also dynamic. This allows us to modify its behavior at run time. A consequence of this is the possibility of monkey patching: the idea that we can modify the base layer of a program without modifying the higher-level code. Imagine you use the print() function to print something to the screen; we can modify the definition of this function to print to a file without changing a single line of your code, as the sketch below illustrates.
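A minimal sketch of that idea, assuming we are free to rebind the built-in (the file name log.txt is arbitrary):

import builtins

_original_print = builtins.print

def print_to_file(*args, **kwargs):
    # append everything that would have gone to the screen into log.txt
    with open("log.txt", "a") as f:
        _original_print(*args, file=f, **kwargs)

builtins.print = print_to_file    # every print() call now writes to log.txt
print("hello")                    # lands in log.txt, not on the screen
builtins.print = _original_print  # restore the original behavior

None of the code that calls print() had to change; only the binding of the name did.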

This is possible because Python is an interpreted language, so we can make changes while the program is running. We can exploit this property to modify the interface of a class or a module. It is useful if we are dealing with legacy code or code from other people that we do not want to modify extensively but still want to run with different versions of libraries or environments. In this tutorial, we are going to see how we can apply this technique to some Keras and TensorFlow code.

After finishing this tutorial, you will learn:

  • What monkey patching is
  • How to change an object or a module in Python at runtime

Let's get started.

Monkey Patching Python Code. Photo by Juan Rumimpunu. Some rights reserved.

Tutorial Overview

This tutorial is in three parts; they are:


  • One model, two interfaces
  • Extending an object with monkey patching
  • Monkey patching to revive legacy code

One Model, Two Interfaces

TensorFlow is a huge library. It provides the high-level Keras API to describe deep learning models in layers. It also comes with a lot of functions for training, such as different optimizers and data generators. Installing TensorFlow is overwhelming if all we need is to run a trained model. Therefore, TensorFlow provides us with a counterpart called TensorFlow Lite that is much smaller in size and suitable for running on small devices such as mobile or embedded devices.

We want to show how the original TensorFlow Keras model and the TensorFlow Lite model are used differently. So let's make a model of moderate size, such as the LeNet-5 model. Below is how we load the MNIST dataset and train a model for classification:


import numpy as np
import tensorflow as tf
from tensorflow.keras.datasets import mnist
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Conv2D, Dense, AveragePooling2D, Dropout, Flatten
from tensorflow.keras.callbacks import EarlyStopping

# Load MNIST data
(X_train, y_train), (X_test, y_test) = mnist.load_data()

# Reshape data to shape of (n_sample, height, width, n_channel)
X_train = np.expand_dims(X_train, axis=3).astype('float32')
X_test = np.expand_dims(X_test, axis=3).astype('float32')

# LeNet-5 model: ReLU can be used instead of tanh
model = Sequential([
    Conv2D(6, (5,5), input_shape=(28,28,1), padding="same", activation="tanh"),
    AveragePooling2D((2,2), strides=2),
    Conv2D(16, (5,5), activation="tanh"),
    AveragePooling2D((2,2), strides=2),
    Conv2D(120, (5,5), activation="tanh"),
    Flatten(),
    Dense(84, activation="tanh"),
    Dense(10, activation="softmax")
])

# Training
model.compile(loss="sparse_categorical_crossentropy", optimizer="adam", metrics=["sparse_categorical_accuracy"])
earlystopping = EarlyStopping(monitor="val_loss", patience=4, restore_best_weights=True)
model.fit(X_train, y_train, validation_data=(X_test, y_test), epochs=100, batch_size=32, callbacks=[earlystopping])

Running the above code will download the MNIST dataset using TensorFlow's dataset API and train the model. Afterward, we can save the model:

model.save("lenet5-mnist.h5")

Or we can evaluate the model with our test set:

print(np.argmax(model.predict(X_test), axis=1))
print(y_test)

Then we should see:

[7 2 1 … 4 5 6]

[7 2 1 … 4 5 6]

But if we intend to use it with TensorFlow Lite, we want to convert it to the TensorFlow Lite format as follows:

# tflite conversion with dynamic range optimization
import tensorflow as tf
converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
tflite_model = converter.convert()

# Optional: Save the data for testing
import numpy as np
np.savez('mnist-test.npz', X=X_test, y=y_test)

# Save the model.
with open('lenet5-mnist.tflite', 'wb') as f:
    f.write(tflite_model)

We can add more options to the converter, such as reducing the model to use 16-bit floating point. But in all cases, the output of the conversion is a binary string. Not only will the conversion reduce the model to a much smaller size (compared to the size of the HDF5 file saved from Keras), but it will also allow us to use it with a lightweight library. There are libraries for Android and iOS mobile devices. If you're using embedded Linux, you may find the tflite-runtime module in the PyPI repository (or you may compile one from TensorFlow source code).
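For instance, a sketch of requesting 16-bit float weights instead of the default dynamic-range optimization (the target_spec option below follows the TensorFlow Lite converter API; model is the trained Keras model from above):

# sketch: float16 quantization instead of the default dynamic range
import tensorflow as tf

converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.target_spec.supported_types = [tf.float16]  # store weights as 16-bit floats
tflite_fp16_model = converter.convert()

Below is how we can use tflite-runtime to run the converted model: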


import numpy as np
import tflite_runtime.interpreter as tflite

loaded = np.load('mnist-test.npz')
X_test = loaded["X"]
y_test = loaded["y"]
interpreter = tflite.Interpreter(model_path="lenet5-mnist.tflite")
interpreter.allocate_tensors()
input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()
print(input_details[0]['shape'])

rows = []
for n in range(len(X_test)):
    # this model has a single input and a single output
    interpreter.set_tensor(input_details[0]['index'], X_test[n:n+1])
    interpreter.invoke()
    row = interpreter.get_tensor(output_details[0]['index'])
    rows.append(row)
rows = np.vstack(rows)

accuracy = np.sum(np.argmax(rows, axis=1) == y_test) / len(y_test)
print(accuracy)

In fact, the larger TensorFlow library can also run the converted model with very similar syntax:


import numpy as np
import tensorflow as tf

interpreter = tf.lite.Interpreter(model_path="lenet5-mnist.tflite")
interpreter.allocate_tensors()
input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

rows = []
for n in range(len(X_test)):
    # this model has a single input and a single output
    interpreter.set_tensor(input_details[0]['index'], X_test[n:n+1])
    interpreter.invoke()
    row = interpreter.get_tensor(output_details[0]['index'])
    rows.append(row)
rows = np.vstack(rows)

accuracy = np.sum(np.argmax(rows, axis=1) == y_test) / len(y_test)
print(accuracy)

Note the different ways of using the models: With the Keras model, we have the predict() function that takes a batch as input and returns a result. With the TensorFlow Lite model, however, we have to inject one input tensor at a time into the "interpreter" and invoke it, then retrieve the result.

Putting everything together, the code below shows how we build a Keras model, train it, convert it to TensorFlow Lite format, and test with the converted model:


import numpy as np
import tensorflow as tf
from tensorflow.keras.datasets import mnist
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Conv2D, Dense, AveragePooling2D, Dropout, Flatten
from tensorflow.keras.callbacks import EarlyStopping

# Load MNIST data
(X_train, y_train), (X_test, y_test) = mnist.load_data()

# Reshape data to shape of (n_sample, height, width, n_channel)
X_train = np.expand_dims(X_train, axis=3).astype('float32')
X_test = np.expand_dims(X_test, axis=3).astype('float32')

# LeNet-5 model: ReLU can be used instead of tanh
model = Sequential([
    Conv2D(6, (5,5), input_shape=(28,28,1), padding="same", activation="tanh"),
    AveragePooling2D((2,2), strides=2),
    Conv2D(16, (5,5), activation="tanh"),
    AveragePooling2D((2,2), strides=2),
    Conv2D(120, (5,5), activation="tanh"),
    Flatten(),
    Dense(84, activation="tanh"),
    Dense(10, activation="softmax")
])

# Training
model.compile(loss="sparse_categorical_crossentropy", optimizer="adam", metrics=["sparse_categorical_accuracy"])
earlystopping = EarlyStopping(monitor="val_loss", patience=4, restore_best_weights=True)
model.fit(X_train, y_train, validation_data=(X_test, y_test), epochs=100, batch_size=32, callbacks=[earlystopping])

# Save model
model.save("lenet5-mnist.h5")

# Compare the prediction vs test data
print(np.argmax(model.predict(X_test), axis=1))
print(y_test)

# tflite conversion with dynamic range optimization
converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
tflite_model = converter.convert()

# Optional: Save the data for testing
np.savez('mnist-test.npz', X=X_test, y=y_test)

# Save the tflite model.
with open('lenet5-mnist.tflite', 'wb') as f:
    f.write(tflite_model)

# Load the tflite model and run test
interpreter = tf.lite.Interpreter(model_path="lenet5-mnist.tflite")
interpreter.allocate_tensors()
input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

rows = []
for n in range(len(X_test)):
    # this model has a single input and a single output
    interpreter.set_tensor(input_details[0]['index'], X_test[n:n+1])
    interpreter.invoke()
    row = interpreter.get_tensor(output_details[0]['index'])
    rows.append(row)
rows = np.vstack(rows)

accuracy = np.sum(np.argmax(rows, axis=1) == y_test) / len(y_test)
print(accuracy)

Extending an Object with Monkey Patching

Can we use predict() with the TensorFlow Lite interpreter?

The interpreter object doesn't have such a function. But since we're using Python, it is possible for us to add it using the monkey patching technique. To understand what we're doing, first note that the interpreter object we defined in the previous code may contain many attributes and functions. When we call interpreter.predict() like a function, Python will look for a member with that name inside the object and then execute it. If no such name is found, Python will raise an AttributeError exception:
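For example, a minimal probe (a hypothetical script, assuming the lenet5-mnist.tflite file saved above):

import tensorflow as tf

interpreter = tf.lite.Interpreter(model_path="lenet5-mnist.tflite")
interpreter.allocate_tensors()
interpreter.predict()   # Interpreter defines no such attribute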

That gives:

Traceback (most recent call last):
  File "/Users/MLM/pred_error.py", line 13, in <module>
    interpreter.predict()
AttributeError: 'Interpreter' object has no attribute 'predict'

To make this work, we need to add a function to the interpreter object with the name predict, and it should behave like one when it is invoked. To keep things simple, we note that our model is a sequential one that takes an array as input and returns an array of softmax results as output. So we can write a predict() function that behaves like the one from the Keras model but uses the TensorFlow Lite interpreter:


...

# Monkey patching the tflite model
def predict(self, input_batch):
    batch_size = len(input_batch)
    output = []

    input_details = self.get_input_details()
    output_details = self.get_output_details()
    # Run each sample from the batch
    for sample in range(batch_size):
        self.set_tensor(input_details[0]["index"], input_batch[sample:sample+1])
        self.invoke()
        sample_output = self.get_tensor(output_details[0]["index"])
        output.append(sample_output)

    # vstack the output of each sample
    return np.vstack(output)

interpreter.predict = predict.__get__(interpreter)

The last line above assigns the function we created to the interpreter object under the name predict. The __get__(interpreter) part is required to turn the function we defined into a bound method of the interpreter object.
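The same binding trick works on any plain object; here is a tiny illustration with hypothetical names, independent of TensorFlow:

import types

class Greeter:
    name = "world"

def hello(self):
    return "hello, " + self.name

g = Greeter()
g.hello = hello.__get__(g)            # bind hello() to this one instance
print(g.hello())                      # prints: hello, world

g.hello = types.MethodType(hello, g)  # equivalent, more explicit spelling
print(g.hello())

Without the binding step, g.hello would be a plain function stored on the instance, and Python would not pass g as self automatically.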

With this, we can now run a batch:

...
out_proba = interpreter.predict(X_test)
out = np.argmax(out_proba, axis=1)
print(out)

accuracy = np.sum(out == y_test) / len(y_test)
print(accuracy)

This is possible because Python has a dynamic object model. We can modify attributes or member functions of an object at runtime. In fact, this should not surprise us. A Keras model needs to run model.compile() before we can run model.fit(). One effect of the compile function is to add the attribute loss to the model to hold the loss function. This is done at runtime.
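A quick, hedged check of that claim (the exact behavior before compile() may vary across Keras versions):

from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense

m = Sequential([Dense(1, input_shape=(4,))])
print(hasattr(m, "loss"))   # may be False before compile(), depending on version
m.compile(loss="mse", optimizer="adam")
print(m.loss)               # "mse": the attribute was attached at runtime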

With the predict() function added to the interpreter object, we can pass the interpreter object around just like a trained Keras model for prediction. While they are different behind the scenes, they share the same interface, so other functions can use it without modifying any line of code.
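A sketch of what that buys us: a helper written against the Keras-style interface (a hypothetical function) works unchanged with the patched interpreter:

import numpy as np

def evaluate_accuracy(model, X, y):
    # only assumes the Keras-style predict() interface
    proba = model.predict(X)
    return np.mean(np.argmax(proba, axis=1) == y)

# evaluate_accuracy(keras_model, X_test, y_test)   # works with a Keras model
# evaluate_accuracy(interpreter, X_test, y_test)   # works with the patched interpreter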

Below is the complete code to load our saved TensorFlow Lite model and monkey patch the predict() function onto it to make it look like a Keras model:


import numpy as np
import tensorflow as tf
from tensorflow.keras.datasets import mnist

# Load MNIST data and reshape
(X_train, y_train), (X_test, y_test) = mnist.load_data()
X_train = np.expand_dims(X_train, axis=3).astype('float32')
X_test = np.expand_dims(X_test, axis=3).astype('float32')

# Monkey patching the tflite model
def predict(self, input_batch):
    batch_size = len(input_batch)
    output = []

    input_details = self.get_input_details()
    output_details = self.get_output_details()
    # Run each sample from the batch
    for sample in range(batch_size):
        self.set_tensor(input_details[0]["index"], input_batch[sample:sample+1])
        self.invoke()
        sample_output = self.get_tensor(output_details[0]["index"])
        output.append(sample_output)

    # vstack the output of each sample
    return np.vstack(output)

# Load and monkey patch
interpreter = tf.lite.Interpreter(model_path="lenet5-mnist.tflite")
interpreter.predict = predict.__get__(interpreter)
interpreter.allocate_tensors()

# test output
out_proba = interpreter.predict(X_test)
out = np.argmax(out_proba, axis=1)
print(out)
accuracy = np.sum(out == y_test) / len(y_test)
print(accuracy)

Monkey Patching to Revive Legacy Code

We can give one more example of monkey patching in Python. Consider the following code:


# https://machinelearningmastery.com/dropout-regularization-deep-learning-models-keras/
# Example of Dropout on the Sonar Dataset: Hidden Layer
from pandas import read_csv
from keras.models import Sequential
from keras.layers import Dense
from keras.layers import Dropout
from keras.wrappers.scikit_learn import KerasClassifier
from keras.constraints import maxnorm
from keras.optimizers import SGD
from sklearn.model_selection import cross_val_score
from sklearn.preprocessing import LabelEncoder
from sklearn.model_selection import StratifiedKFold
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import Pipeline
# load dataset
dataframe = read_csv("sonar.csv", header=None)
dataset = dataframe.values
# split into input (X) and output (Y) variables
X = dataset[:,0:60].astype(float)
Y = dataset[:,60]
# encode class values as integers
encoder = LabelEncoder()
encoder.fit(Y)
encoded_Y = encoder.transform(Y)

# dropout in hidden layers with weight constraint
def create_model():
    # create model
    model = Sequential()
    model.add(Dense(60, input_dim=60, activation='relu', kernel_constraint=maxnorm(3)))
    model.add(Dropout(0.2))
    model.add(Dense(30, activation='relu', kernel_constraint=maxnorm(3)))
    model.add(Dropout(0.2))
    model.add(Dense(1, activation='sigmoid'))
    # Compile model
    sgd = SGD(lr=0.1, momentum=0.9)
    model.compile(loss='binary_crossentropy', optimizer=sgd, metrics=['accuracy'])
    return model

estimators = []
estimators.append(('standardize', StandardScaler()))
estimators.append(('mlp', KerasClassifier(build_fn=create_model, epochs=300, batch_size=16, verbose=0)))
pipeline = Pipeline(estimators)
kfold = StratifiedKFold(n_splits=10, shuffle=True)
results = cross_val_score(pipeline, X, encoded_Y, cv=kfold)
print("Hidden: %.2f%% (%.2f%%)" % (results.mean()*100, results.std()*100))

This code was written a few years back and assumes an older version of Keras with TensorFlow 1.x. The data file sonar.csv can be found in the other post. If we run this code with TensorFlow 2.5, we will see an ImportError on the line importing SGD. We need to make at least two changes in the above code to make it run:

  1. Functions and classes should be imported from tensorflow.keras instead of keras
  2. The constraint class maxnorm should be in camel case, MaxNorm

The following is the updated code, in which we modified only the import statements:


# Example of Dropout on the Sonar Dataset: Hidden Layer
from pandas import read_csv
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense
from tensorflow.keras.layers import Dropout
from tensorflow.keras.wrappers.scikit_learn import KerasClassifier
from tensorflow.keras.constraints import MaxNorm as maxnorm
from tensorflow.keras.optimizers import SGD
from sklearn.model_selection import cross_val_score
from sklearn.preprocessing import LabelEncoder
from sklearn.model_selection import StratifiedKFold
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import Pipeline
# load dataset
dataframe = read_csv("sonar.csv", header=None)
dataset = dataframe.values
# split into input (X) and output (Y) variables
X = dataset[:,0:60].astype(float)
Y = dataset[:,60]
# encode class values as integers
encoder = LabelEncoder()
encoder.fit(Y)
encoded_Y = encoder.transform(Y)

# dropout in hidden layers with weight constraint
def create_model():
    # create model
    model = Sequential()
    model.add(Dense(60, input_dim=60, activation='relu', kernel_constraint=maxnorm(3)))
    model.add(Dropout(0.2))
    model.add(Dense(30, activation='relu', kernel_constraint=maxnorm(3)))
    model.add(Dropout(0.2))
    model.add(Dense(1, activation='sigmoid'))
    # Compile model
    sgd = SGD(lr=0.1, momentum=0.9)
    model.compile(loss='binary_crossentropy', optimizer=sgd, metrics=['accuracy'])
    return model

estimators = []
estimators.append(('standardize', StandardScaler()))
estimators.append(('mlp', KerasClassifier(build_fn=create_model, epochs=300, batch_size=16, verbose=0)))
pipeline = Pipeline(estimators)
kfold = StratifiedKFold(n_splits=10, shuffle=True)
results = cross_val_score(pipeline, X, encoded_Y, cv=kfold)
print("Hidden: %.2f%% (%.2f%%)" % (results.mean()*100, results.std()*100))

If we have a much bigger project with a lot of scripts, it would be tedious to modify every single line of imports. But Python's module system is just a dictionary at sys.modules, so we can monkey patch it to make the old code fit the new library.
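First, a tiny illustration that sys.modules really is just a dict; the alias name np_alias is hypothetical:

import sys
import numpy
sys.modules["np_alias"] = numpy   # plant an alias in the import cache

import np_alias                   # resolved from sys.modules, no file lookup needed
print(np_alias.array([1, 2, 3]))

Applying the same trick to Keras, the following works for TensorFlow 2.5 installations (this backward-compatibility issue with Keras code was fixed in TensorFlow 2.9, so you don't need this patching with the latest versions of the libraries):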


# monkey patching
import sys
import tensorflow.keras
tensorflow.keras.constraints.maxnorm = tensorflow.keras.constraints.MaxNorm
for x in list(sys.modules.keys()):  # list() so we can mutate sys.modules while iterating
    if x.startswith("tensorflow.keras"):
        sys.modules[x[len("tensorflow."):]] = sys.modules[x]

# Old code below:

# Example of Dropout on the Sonar Dataset: Hidden Layer
from pandas import read_csv
from keras.models import Sequential
from keras.layers import Dense
from keras.layers import Dropout
from keras.wrappers.scikit_learn import KerasClassifier
from keras.constraints import maxnorm
from keras.optimizers import SGD
from sklearn.model_selection import cross_val_score
from sklearn.preprocessing import LabelEncoder
from sklearn.model_selection import StratifiedKFold
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import Pipeline
# load dataset
dataframe = read_csv("sonar.csv", header=None)
dataset = dataframe.values
# split into input (X) and output (Y) variables
X = dataset[:,0:60].astype(float)
Y = dataset[:,60]
# encode class values as integers
encoder = LabelEncoder()
encoder.fit(Y)
encoded_Y = encoder.transform(Y)

# dropout in hidden layers with weight constraint
def create_model():
    # create model
    model = Sequential()
    model.add(Dense(60, input_dim=60, activation='relu', kernel_constraint=maxnorm(3)))
    model.add(Dropout(0.2))
    model.add(Dense(30, activation='relu', kernel_constraint=maxnorm(3)))
    model.add(Dropout(0.2))
    model.add(Dense(1, activation='sigmoid'))
    # Compile model
    sgd = SGD(lr=0.1, momentum=0.9)
    model.compile(loss='binary_crossentropy', optimizer=sgd, metrics=['accuracy'])
    return model

estimators = []
estimators.append(('standardize', StandardScaler()))
estimators.append(('mlp', KerasClassifier(build_fn=create_model, epochs=300, batch_size=16, verbose=0)))
pipeline = Pipeline(estimators)
kfold = StratifiedKFold(n_splits=10, shuffle=True)
results = cross_val_score(pipeline, X, encoded_Y, cv=kfold)
print("Hidden: %.2f%% (%.2f%%)" % (results.mean()*100, results.std()*100))

This is definitely not clean and tidy code, and it will be a problem for future maintenance. Therefore, monkey patching is unwelcome in production code. However, it can be a quick technique that exploits the inner mechanisms of the Python language to get something to work easily.

Further Reading

This section provides more resources on the topic if you are looking to go deeper.

Summary

In this tutorial, we learned what monkey patching is and how to do it. Specifically,

  • We learned how to add a member function to an existing object
  • How to modify the Python module cache at sys.modules to deceive the import statements


