Code example #1
class Stack(Z):
    # The base class Z and the X/Y classes below come from elsewhere in the
    # original project; this snippet does not define them.
    x = X()
    y = Y()

    def __init__(self):
        self.items = []

    def is_Empty(self):
        # An empty list is falsy, so this is equivalent to `not self.items`.
        return self.items == []

    def push(self, item):
        self.items.append(item)

    def pop(self):
        return self.items.pop()

    def size(self):
        return len(self.items)

    def stack_1(self):
        # Delegates to self.a and self.b, which are not defined in this snippet.
        self.a.a_1()
        self.b.b_1()

    def stack_2(self, num):
        # Calls x_1 and y_2, which are also not defined in this snippet.
        self.x_1(num)
        self.x.y_2()
        self.y_2()
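As a minimal, self-contained sketch of just the list-backed core above (the Z base class and the X/Y collaborators are left out because the snippet does not define them), the push/pop behaviour works like this:

```
class SimpleStack:
    # Same list-backed idea as Stack above, without the external collaborators.
    def __init__(self):
        self.items = []

    def is_empty(self):
        return not self.items       # an empty list is falsy

    def push(self, item):
        self.items.append(item)     # push onto the end of the list

    def pop(self):
        return self.items.pop()     # pop from the end (LIFO)

    def size(self):
        return len(self.items)


s = SimpleStack()
s.push(1)
s.push(2)
print(s.pop())       # 2 (last in, first out)
print(s.size())      # 1
print(s.is_empty())  # False
```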
Code example #2
def __init__(self, parent=None):
    # __init__ of the Y_info widget; the enclosing class definition is not shown in this snippet.
    super(Y_info, self).__init__(parent)
    # Place a 90x90 label at (550, 10) and fill it with a scaled image.
    label = QLabel(self)
    label.setGeometry(QRect(550, 10, 90, 90))
    pixmap = QPixmap('img15.jpg')
    pixmap = pixmap.scaledToWidth(90)
    label.setPixmap(pixmap)
    # Load and apply the Qt Designer-generated form from module Y.
    self.ui = Y.Ui_Form()
    self.ui.setupUi(self)
Code example #3
File: 12.py  Project: Roman-Rudensky/21v-python
import Y  # module name anonymized by the snippet source; defined elsewhere in the project

y = Y.init(4)
y1 = Y.value(y, 1)
print(y1)  # print() works on Python 2 and 3; the original used the Python 2 print statement
Code example #4
Statistical Learning

X (the inputs) are usually distinguished by a subscript (X1, ..., Xp) and go by
many different names: predictors, independent variables, features, or sometimes
just variables.

Y (the output) is also commonly called the response or the dependent variable.

Y = ƒ(X) + ε

ƒ(X): a fixed but unknown function of X1, ..., Xp
ε: the random error term, which is independent of X and has a mean of 0.

∴ ƒ represents the systematic information that X provides about Y.

Refer to page 17, Fig 2.2.

ƒ, which is generally unknown (it can only be known when the data are simulated),
is the function of X: the line in Fig 2.2 that shows the connection between X and Y.

ε, the error term, is represented by the vertical line segments between that line
and each observed point.

Note: some errors are positive and some negative, depending on whether the point
lies above or below ƒ.
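To make the decomposition concrete, here is a minimal simulation sketch (the particular ƒ and noise level are made up for illustration; they are not the ones behind Fig 2.2): data are generated as Y = ƒ(X) + ε, and the vertical distances between the points and the line recover the errors.

```
import numpy as np

rng = np.random.default_rng(0)

def f(x):
    # A made-up "true" f; in real data f is unknown.
    return 2.0 + 3.0 * x

X = rng.uniform(0, 1, size=200)
eps = rng.normal(loc=0.0, scale=0.5, size=200)  # error term: mean 0, independent of X
Y = f(X) + eps                                  # the model Y = f(X) + eps

# The vertical distances between each point and the line f(X) are the errors:
residuals = Y - f(X)
print(residuals[:5])     # some positive, some negative
print(residuals.mean())  # close to 0
```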

```Statistical Learning refers to a set of approaches for estimating ƒ```

    > Why Estimate ƒ? <
    Two main reasons: prediction and inference.
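As a hypothetical follow-up sketch (simulated data, with np.polyfit standing in for whatever learning method one might actually use), "estimating ƒ" amounts to fitting an estimate from observed (X, Y) pairs and then using it for prediction and inference:

```
import numpy as np

rng = np.random.default_rng(1)

def f(x):
    # The "true" f, known here only because the data are simulated.
    return 2.0 + 3.0 * x

X = rng.uniform(0, 1, size=200)
Y = f(X) + rng.normal(0.0, 0.5, size=200)

# Estimate f with a straight-line fit (one of many possible approaches).
slope, intercept = np.polyfit(X, Y, deg=1)

def f_hat(x):
    return intercept + slope * x

# Prediction: plug a new x into the estimated f_hat.
print(f_hat(0.5), f(0.5))  # estimate vs. true value

# Inference: the fitted coefficients describe how Y changes with X
# (slope should come out near 3.0, intercept near 2.0).
print(slope, intercept)
```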
Code example #5
File: X.py  Project: glareprotector/active_site
import pdb
pdb.set_trace()  # drop into the interactive debugger before the rest of the module runs

import Y         # module name anonymized by the snippet source
val = 0          # unused in this snippet
Y.f()